Apr 30 03:28:39.111252 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:28:39.111298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.111316 kernel: BIOS-provided physical RAM map: Apr 30 03:28:39.111330 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Apr 30 03:28:39.111343 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Apr 30 03:28:39.111357 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Apr 30 03:28:39.111373 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Apr 30 03:28:39.111391 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Apr 30 03:28:39.111406 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Apr 30 03:28:39.111420 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Apr 30 03:28:39.111434 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Apr 30 03:28:39.111449 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Apr 30 03:28:39.111464 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Apr 30 03:28:39.111478 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Apr 30 03:28:39.111499 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Apr 30 03:28:39.111515 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Apr 30 03:28:39.111531 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Apr 30 03:28:39.111547 kernel: NX (Execute Disable) protection: active Apr 30 03:28:39.111562 kernel: APIC: Static calls initialized Apr 30 03:28:39.111578 kernel: efi: EFI v2.7 by EDK II Apr 30 03:28:39.111594 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Apr 30 03:28:39.111610 kernel: SMBIOS 2.4 present. Apr 30 03:28:39.111627 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 Apr 30 03:28:39.111643 kernel: Hypervisor detected: KVM Apr 30 03:28:39.111664 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:28:39.111681 kernel: kvm-clock: using sched offset of 12236025730 cycles Apr 30 03:28:39.111699 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:28:39.111716 kernel: tsc: Detected 2299.998 MHz processor Apr 30 03:28:39.111733 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:28:39.111751 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:28:39.111767 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Apr 30 03:28:39.111784 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Apr 30 03:28:39.111801 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:28:39.111822 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Apr 30 03:28:39.111845 kernel: Using GB pages for direct mapping Apr 30 03:28:39.111862 kernel: Secure boot disabled Apr 30 03:28:39.111879 kernel: ACPI: Early table checksum verification disabled Apr 30 03:28:39.111895 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Apr 30 03:28:39.111912 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Apr 30 03:28:39.111929 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Apr 30 03:28:39.111954 kernel: ACPI: DSDT 
0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 30 03:28:39.111974 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 30 03:28:39.111991 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Apr 30 03:28:39.112008 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 30 03:28:39.112040 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 30 03:28:39.112069 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 30 03:28:39.112088 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 30 03:28:39.112111 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 30 03:28:39.112129 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 30 03:28:39.112145 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 30 03:28:39.112218 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 30 03:28:39.112235 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 30 03:28:39.112252 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 30 03:28:39.112268 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 30 03:28:39.112284 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 30 03:28:39.112300 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 30 03:28:39.112321 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 30 03:28:39.112338 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:28:39.112354 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:28:39.112371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 30 03:28:39.112388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Apr 30 03:28:39.112404 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 30 03:28:39.112421 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 30 03:28:39.112541 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 30 03:28:39.112562 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Apr 30 03:28:39.112594 kernel: Zone ranges: Apr 30 03:28:39.112614 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:28:39.112633 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 03:28:39.112652 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 30 03:28:39.112671 kernel: Movable zone start for each node Apr 30 03:28:39.112689 kernel: Early memory node ranges Apr 30 03:28:39.112708 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 30 03:28:39.112727 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 30 03:28:39.112746 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 30 03:28:39.112770 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 30 03:28:39.112790 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 30 03:28:39.112809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 30 03:28:39.112829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:28:39.112847 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 30 03:28:39.112867 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 30 03:28:39.112887 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 30 03:28:39.112907 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 30 03:28:39.112926 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 30 03:28:39.112951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:28:39.112970 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 
03:28:39.112990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:28:39.113009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:28:39.113047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:28:39.113066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:28:39.113085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:28:39.113105 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:28:39.113125 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 30 03:28:39.113151 kernel: Booting paravirtualized kernel on KVM Apr 30 03:28:39.113172 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:28:39.113190 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:28:39.113210 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:28:39.113229 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:28:39.113249 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:28:39.113268 kernel: kvm-guest: PV spinlocks enabled Apr 30 03:28:39.113288 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:28:39.113311 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.113338 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Apr 30 03:28:39.113358 kernel: random: crng init done Apr 30 03:28:39.113378 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 03:28:39.113399 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:28:39.113438 kernel: Fallback order for Node 0: 0 Apr 30 03:28:39.113456 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 30 03:28:39.113476 kernel: Policy zone: Normal Apr 30 03:28:39.113495 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:28:39.113521 kernel: software IO TLB: area num 2. Apr 30 03:28:39.113543 kernel: Memory: 7513388K/7860584K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 346936K reserved, 0K cma-reserved) Apr 30 03:28:39.113561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:28:39.113581 kernel: Kernel/User page tables isolation: enabled Apr 30 03:28:39.113599 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:28:39.113617 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:28:39.113637 kernel: Dynamic Preempt: voluntary Apr 30 03:28:39.113657 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:28:39.113678 kernel: rcu: RCU event tracing is enabled. Apr 30 03:28:39.113719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:28:39.113740 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:28:39.113761 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:28:39.113786 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:28:39.113806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:28:39.113825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:28:39.113844 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 03:28:39.113864 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 03:28:39.113883 kernel: Console: colour dummy device 80x25 Apr 30 03:28:39.113908 kernel: printk: console [ttyS0] enabled Apr 30 03:28:39.113928 kernel: ACPI: Core revision 20230628 Apr 30 03:28:39.113949 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:28:39.113970 kernel: x2apic enabled Apr 30 03:28:39.113990 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:28:39.114011 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 30 03:28:39.114050 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 30 03:28:39.114083 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 30 03:28:39.114108 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 30 03:28:39.114126 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 30 03:28:39.114145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:28:39.114164 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 03:28:39.114182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 03:28:39.114199 kernel: Spectre V2 : Mitigation: IBRS Apr 30 03:28:39.114218 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:28:39.114237 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:28:39.114258 kernel: RETBleed: Mitigation: IBRS Apr 30 03:28:39.114284 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 03:28:39.114301 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 30 03:28:39.114318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 03:28:39.114336 kernel: MDS: Mitigation: Clear CPU buffers Apr 30 03:28:39.114355 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 
30 03:28:39.114375 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:28:39.114397 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:28:39.114430 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:28:39.114450 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:28:39.114479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 30 03:28:39.114501 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:28:39.114522 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:28:39.114543 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:28:39.114565 kernel: landlock: Up and running. Apr 30 03:28:39.114585 kernel: SELinux: Initializing. Apr 30 03:28:39.114605 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.114626 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.114647 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 30 03:28:39.114673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114694 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114736 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 30 03:28:39.114756 kernel: signal: max sigframe size: 1776 Apr 30 03:28:39.114777 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:28:39.114798 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:28:39.114819 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:28:39.114840 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 03:28:39.114866 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:28:39.114886 kernel: .... node #0, CPUs: #1 Apr 30 03:28:39.114908 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 30 03:28:39.114927 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 30 03:28:39.114945 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:28:39.114965 kernel: smpboot: Max logical packages: 1 Apr 30 03:28:39.114986 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 30 03:28:39.115006 kernel: devtmpfs: initialized Apr 30 03:28:39.115064 kernel: x86/mm: Memory block size: 128MB Apr 30 03:28:39.115085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 30 03:28:39.115106 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:28:39.115126 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:28:39.115146 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:28:39.115168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:28:39.115188 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:28:39.115208 kernel: audit: type=2000 audit(1745983718.119:1): state=initialized audit_enabled=0 res=1 Apr 30 03:28:39.115228 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:28:39.115256 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:28:39.115276 kernel: cpuidle: using governor menu Apr 30 03:28:39.115297 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:28:39.115316 kernel: dca service started, version 1.12.1 Apr 30 03:28:39.115337 kernel: PCI: Using configuration type 1 for base access Apr 30 
03:28:39.115357 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:28:39.115378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:28:39.115398 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:28:39.115426 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:28:39.115454 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:28:39.115473 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:28:39.115494 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:28:39.115514 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:28:39.115534 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:28:39.115554 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 30 03:28:39.115574 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:28:39.115595 kernel: ACPI: Interpreter enabled Apr 30 03:28:39.115615 kernel: ACPI: PM: (supports S0 S3 S5) Apr 30 03:28:39.115641 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:28:39.115660 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:28:39.115688 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 03:28:39.115707 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 30 03:28:39.115726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:28:39.116066 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:28:39.116333 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 30 03:28:39.116580 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 30 03:28:39.116616 kernel: PCI host bridge to bus 0000:00 Apr 30 03:28:39.116840 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Apr 30 03:28:39.117072 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:28:39.117284 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:28:39.117499 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 30 03:28:39.117706 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:28:39.117955 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 30 03:28:39.118242 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 30 03:28:39.118505 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 30 03:28:39.118741 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 30 03:28:39.118978 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 30 03:28:39.119249 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 30 03:28:39.119505 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 30 03:28:39.119761 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 30 03:28:39.119995 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 30 03:28:39.120285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 30 03:28:39.120537 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 03:28:39.120771 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 30 03:28:39.121005 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 30 03:28:39.121062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:28:39.121080 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:28:39.121097 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:28:39.121115 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:28:39.121133 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 30 03:28:39.121150 kernel: iommu: Default domain type: Translated Apr 30 03:28:39.121167 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Apr 30 03:28:39.121185 kernel: efivars: Registered efivars operations Apr 30 03:28:39.121205 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:28:39.121234 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:28:39.121255 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 30 03:28:39.121275 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 30 03:28:39.121295 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 30 03:28:39.121314 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 30 03:28:39.121331 kernel: vgaarb: loaded Apr 30 03:28:39.121350 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:28:39.121368 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:28:39.121387 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:28:39.121420 kernel: pnp: PnP ACPI init Apr 30 03:28:39.121440 kernel: pnp: PnP ACPI: found 7 devices Apr 30 03:28:39.121460 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:28:39.121479 kernel: NET: Registered PF_INET protocol family Apr 30 03:28:39.121498 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:28:39.121519 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 30 03:28:39.121536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:28:39.121558 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:28:39.121578 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:28:39.121603 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 30 03:28:39.121623 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.121644 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.121663 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:28:39.121683 kernel: NET: Registered PF_XDP protocol family Apr 30 03:28:39.121937 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:28:39.122189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:28:39.122427 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:28:39.122648 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 30 03:28:39.122885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 30 03:28:39.122912 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:28:39.122934 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:28:39.122955 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 30 03:28:39.122976 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 03:28:39.122994 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 30 03:28:39.123012 kernel: clocksource: Switched to clocksource tsc Apr 30 03:28:39.123111 kernel: Initialise system trusted keyrings Apr 30 03:28:39.123132 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 30 03:28:39.123154 kernel: Key type asymmetric registered Apr 30 03:28:39.123174 kernel: Asymmetric key parser 'x509' registered Apr 30 03:28:39.123194 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:28:39.123215 kernel: io scheduler mq-deadline registered Apr 30 03:28:39.123237 kernel: io scheduler kyber registered Apr 30 03:28:39.123258 kernel: io scheduler bfq registered Apr 30 03:28:39.123278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:28:39.123307 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 30 03:28:39.123577 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.123605 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Apr 30 03:28:39.123835 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.123861 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 30 03:28:39.124127 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.124155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:28:39.124174 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:28:39.124195 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:28:39.124224 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 30 03:28:39.124245 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 30 03:28:39.124497 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 30 03:28:39.124525 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:28:39.124547 kernel: i8042: Warning: Keylock active Apr 30 03:28:39.124568 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 03:28:39.124591 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:28:39.124820 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 30 03:28:39.125064 kernel: rtc_cmos 00:00: registered as rtc0 Apr 30 03:28:39.125271 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:28:38 UTC (1745983718) Apr 30 03:28:39.125492 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 30 03:28:39.125515 kernel: intel_pstate: CPU model not supported Apr 30 03:28:39.125534 kernel: pstore: Using crash dump compression: deflate Apr 30 03:28:39.125552 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 03:28:39.125570 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:28:39.125589 kernel: Segment Routing with IPv6 Apr 30 03:28:39.125617 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:28:39.125637 kernel: NET: Registered PF_PACKET protocol family Apr 30 
03:28:39.125656 kernel: Key type dns_resolver registered Apr 30 03:28:39.125676 kernel: IPI shorthand broadcast: enabled Apr 30 03:28:39.125695 kernel: sched_clock: Marking stable (871005149, 155135969)->(1054731156, -28590038) Apr 30 03:28:39.125716 kernel: registered taskstats version 1 Apr 30 03:28:39.125736 kernel: Loading compiled-in X.509 certificates Apr 30 03:28:39.125756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:28:39.125776 kernel: Key type .fscrypt registered Apr 30 03:28:39.125802 kernel: Key type fscrypt-provisioning registered Apr 30 03:28:39.125822 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:28:39.125843 kernel: ima: No architecture policies found Apr 30 03:28:39.125863 kernel: clk: Disabling unused clocks Apr 30 03:28:39.125883 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:28:39.125903 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:28:39.125923 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:28:39.125943 kernel: Run /init as init process Apr 30 03:28:39.125968 kernel: with arguments: Apr 30 03:28:39.125987 kernel: /init Apr 30 03:28:39.126005 kernel: with environment: Apr 30 03:28:39.126041 kernel: HOME=/ Apr 30 03:28:39.126073 kernel: TERM=linux Apr 30 03:28:39.126093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:28:39.126111 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 30 03:28:39.126132 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:39.126162 systemd[1]: Detected virtualization google. 
Apr 30 03:28:39.126182 systemd[1]: Detected architecture x86-64. Apr 30 03:28:39.126202 systemd[1]: Running in initrd. Apr 30 03:28:39.126222 systemd[1]: No hostname configured, using default hostname. Apr 30 03:28:39.126240 systemd[1]: Hostname set to . Apr 30 03:28:39.126258 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:39.126277 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:28:39.126298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:39.126325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:39.126348 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:28:39.126369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:39.126391 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:28:39.126422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:28:39.126446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:28:39.126468 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:28:39.126495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:39.126517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:39.126562 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:39.126590 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:39.126611 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:39.126633 systemd[1]: Reached target timers.target - Timer Units. 
Apr 30 03:28:39.126661 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:39.126683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:39.126705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:28:39.126727 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:28:39.126749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:39.126771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:39.126793 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:39.126815 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:39.126841 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:28:39.126863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:39.126885 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:28:39.126908 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:28:39.126930 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:39.126951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:39.127016 systemd-journald[183]: Collecting audit messages is disabled. Apr 30 03:28:39.127113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:39.127136 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:39.127158 systemd-journald[183]: Journal started Apr 30 03:28:39.127199 systemd-journald[183]: Runtime Journal (/run/log/journal/040ee8dba8ef4cbd925cc613074b560b) is 8.0M, max 148.7M, 140.7M free. Apr 30 03:28:39.137372 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 30 03:28:39.138198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:39.143358 systemd-modules-load[184]: Inserted module 'overlay' Apr 30 03:28:39.144202 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:28:39.158395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:39.162500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:39.169277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:39.178382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:39.195068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:28:39.200057 kernel: Bridge firewalling registered Apr 30 03:28:39.197695 systemd-modules-load[184]: Inserted module 'br_netfilter' Apr 30 03:28:39.199751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:39.213259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:39.215106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:39.226244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:39.226932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:39.242326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:39.252523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:39.253100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 03:28:39.263331 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:28:39.276970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:39.295709 dracut-cmdline[216]: dracut-dracut-053 Apr 30 03:28:39.300396 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.328821 systemd-resolved[217]: Positive Trust Anchors: Apr 30 03:28:39.329455 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:39.329525 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:39.336357 systemd-resolved[217]: Defaulting to hostname 'linux'. Apr 30 03:28:39.339664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:39.353445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:39.414072 kernel: SCSI subsystem initialized Apr 30 03:28:39.425083 kernel: Loading iSCSI transport class v2.0-870. 
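The "Negative trust anchors" list that systemd-resolved logs above is the set of domains it exempts from DNSSEC validation. The matching is per DNS label (suffix of labels, not substring), which a short sketch using only a subset of the anchors from the log can illustrate:

```python
# Subset of the negative trust anchors systemd-resolved lists above;
# names at or below one of these are exempt from DNSSEC validation.
NEGATIVE_TRUST_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "ipv4only.arpa", "corp", "home", "internal",
    "intranet", "lan", "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if `name` equals an anchor or is a subdomain of one.
    Matching is per DNS label, not by substring, so "mylocal" does
    not match the "local" anchor but "printer.local" does."""
    labels = name.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in NEGATIVE_TRUST_ANCHORS
               for i in range(len(labels)))

print(under_negative_anchor("printer.local"))  # True
print(under_negative_anchor("example.com"))    # False
```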
Apr 30 03:28:39.438075 kernel: iscsi: registered transport (tcp) Apr 30 03:28:39.461423 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:28:39.461507 kernel: QLogic iSCSI HBA Driver Apr 30 03:28:39.515153 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:39.523258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:28:39.564278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:28:39.564363 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:28:39.564391 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:28:39.610101 kernel: raid6: avx2x4 gen() 17816 MB/s Apr 30 03:28:39.627084 kernel: raid6: avx2x2 gen() 18171 MB/s Apr 30 03:28:39.644617 kernel: raid6: avx2x1 gen() 13870 MB/s Apr 30 03:28:39.644675 kernel: raid6: using algorithm avx2x2 gen() 18171 MB/s Apr 30 03:28:39.662893 kernel: raid6: .... xor() 17778 MB/s, rmw enabled Apr 30 03:28:39.662944 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:28:39.686080 kernel: xor: automatically using best checksumming function avx Apr 30 03:28:39.860066 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:28:39.873997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:39.883292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:39.915065 systemd-udevd[400]: Using default interface naming scheme 'v255'. Apr 30 03:28:39.922741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:39.936342 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:28:39.969980 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Apr 30 03:28:40.008263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 03:28:40.023261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:40.103643 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:40.115532 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:28:40.157758 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:40.170402 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:40.175209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:40.179202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:40.187252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:28:40.229804 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:40.238724 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:28:40.257194 kernel: scsi host0: Virtio SCSI HBA Apr 30 03:28:40.267419 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 30 03:28:40.327539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:40.332226 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:28:40.332273 kernel: AES CTR mode by8 optimization enabled Apr 30 03:28:40.329551 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:40.339076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:40.343384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:40.343629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:40.346860 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 03:28:40.375251 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Apr 30 03:28:40.376479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:40.396697 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Apr 30 03:28:40.396960 kernel: sd 0:0:1:0: [sda] Write Protect is off
Apr 30 03:28:40.397232 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Apr 30 03:28:40.397482 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:28:40.397717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:28:40.397745 kernel: GPT:17805311 != 25165823
Apr 30 03:28:40.397769 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:28:40.397794 kernel: GPT:17805311 != 25165823
Apr 30 03:28:40.397818 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:28:40.397842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:28:40.397867 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Apr 30 03:28:40.411537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:40.424276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:40.453976 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445)
Apr 30 03:28:40.463093 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (452)
Apr 30 03:28:40.488375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Apr 30 03:28:40.498136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:40.506213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Apr 30 03:28:40.513467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
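The GPT complaint above is the usual sign of a grown disk: the backup GPT header is required to sit on the last LBA, but it is still at the LBA where the smaller original image ended. A sketch of the arithmetic the kernel is doing, using the 17805311/25165823 values from the log:

```python
SECTOR = 512  # logical block size reported for sda above

def gpt_backup_mismatch(total_sectors: int, alt_lba: int):
    """The backup GPT header must live on the disk's last LBA.  Returns
    None when it does, otherwise (where_it_is, where_it_should_be),
    the pair the kernel prints as "GPT:17805311 != 25165823"."""
    expected_lba = total_sectors - 1
    return None if alt_lba == expected_lba else (alt_lba, expected_lba)

# Values from the log: a 12 GiB disk whose GPT was written when the
# image was smaller, so the backup header is stranded mid-disk.
print(gpt_backup_mismatch(25165824, 17805311))  # (17805311, 25165823)
print((17805311 + 1) * SECTOR / 2**30)          # 8.490234375 GiB: size when the GPT was written
```

As the kernel's hint says, GNU Parted can repair this; `sgdisk -e` likewise relocates the backup GPT structures to the actual end of the disk.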
Apr 30 03:28:40.519618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 30 03:28:40.519781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 30 03:28:40.534283 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:40.557927 disk-uuid[550]: Primary Header is updated. Apr 30 03:28:40.557927 disk-uuid[550]: Secondary Entries is updated. Apr 30 03:28:40.557927 disk-uuid[550]: Secondary Header is updated. Apr 30 03:28:40.571238 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:40.580068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:40.590322 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:41.605052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:41.605136 disk-uuid[551]: The operation has completed successfully. Apr 30 03:28:41.684168 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:41.684317 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:41.718277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:28:41.748046 sh[569]: Success Apr 30 03:28:41.772235 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:28:41.864061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:28:41.871894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:41.895645 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 03:28:41.944536 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:41.944625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:41.944650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:41.953986 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:41.966569 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:41.996160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:28:42.000153 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:42.001128 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:42.006273 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:42.069262 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:42.069317 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:42.069468 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:42.036291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:28:42.120244 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:42.120296 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:42.120322 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:42.111389 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:28:42.126948 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:42.156385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 30 03:28:42.252119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:42.261868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:42.357906 systemd-networkd[752]: lo: Link UP
Apr 30 03:28:42.358513 systemd-networkd[752]: lo: Gained carrier
Apr 30 03:28:42.361111 systemd-networkd[752]: Enumeration completed
Apr 30 03:28:42.361828 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:42.361835 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:42.362221 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:42.364891 systemd-networkd[752]: eth0: Link UP
Apr 30 03:28:42.364900 systemd-networkd[752]: eth0: Gained carrier
Apr 30 03:28:42.364918 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:42.368061 ignition[667]: Ignition 2.19.0
Apr 30 03:28:42.368073 ignition[667]: Stage: fetch-offline
Apr 30 03:28:42.368145 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:42.368163 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 30 03:28:42.368357 ignition[667]: parsed url from cmdline: ""
Apr 30 03:28:42.368364 ignition[667]: no config URL provided
Apr 30 03:28:42.368374 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:42.368390 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:42.368402 ignition[667]: failed to fetch config: resource requires networking
Apr 30 03:28:42.368711 ignition[667]: Ignition finished successfully
Apr 30 03:28:42.374150 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 30 03:28:42.383624 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:42.401079 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:42.413312 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:28:42.442992 ignition[760]: Ignition 2.19.0
Apr 30 03:28:42.443002 ignition[760]: Stage: fetch
Apr 30 03:28:42.443246 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:42.443259 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 30 03:28:42.443383 ignition[760]: parsed url from cmdline: ""
Apr 30 03:28:42.443390 ignition[760]: no config URL provided
Apr 30 03:28:42.443399 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:28:42.443411 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:28:42.443433 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Apr 30 03:28:42.447984 ignition[760]: GET result: OK
Apr 30 03:28:42.448102 ignition[760]: parsing config with SHA512: 57ecac66891f3fbfe06d9aee624c46b24a2d2f1db16bd851ef26f625c175f515f1ba40a31ff80881f260d74ad93a3e0df09a295287af0ab48ec2c6fb5a0bacb2
Apr 30 03:28:42.454168 unknown[760]: fetched base config from "system"
Apr 30 03:28:42.454184 unknown[760]: fetched base config from "system"
Apr 30 03:28:42.454195 unknown[760]: fetched user config from "gcp"
Apr 30 03:28:42.454795 ignition[760]: fetch: fetch complete
Apr 30 03:28:42.454805 ignition[760]: fetch: fetch passed
Apr 30 03:28:42.454876 ignition[760]: Ignition finished successfully
Apr 30 03:28:42.457619 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:42.475304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:28:42.508512 ignition[765]: Ignition 2.19.0
Apr 30 03:28:42.508523 ignition[765]: Stage: kargs
Apr 30 03:28:42.508716 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:42.508728 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 30 03:28:42.509723 ignition[765]: kargs: kargs passed
Apr 30 03:28:42.509782 ignition[765]: Ignition finished successfully
Apr 30 03:28:42.517840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:42.550270 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:28:42.601223 ignition[772]: Ignition 2.19.0
Apr 30 03:28:42.601233 ignition[772]: Stage: disks
Apr 30 03:28:42.601453 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:42.601466 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 30 03:28:42.602431 ignition[772]: disks: disks passed
Apr 30 03:28:42.602491 ignition[772]: Ignition finished successfully
Apr 30 03:28:42.603596 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:28:42.623442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:42.641391 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:28:42.660259 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:42.676389 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:42.686450 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:42.721330 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:28:42.773478 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 03:28:42.960186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:28:42.965211 systemd[1]: Mounting sysroot.mount - /sysroot...
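Ignition's fetch stage above requests user-data from the metadata server and journals the SHA512 of the config it then parses. A hedged sketch of both steps (the URL is the one in the log; the `Metadata-Flavor: Google` header is required by the GCE metadata server, and the request only succeeds from inside a GCE VM, so the sketch stops at building it):

```python
import hashlib
import urllib.request

# URL copied verbatim from the ignition[760] GET entry above.
USER_DATA_URL = ("http://169.254.169.254/computeMetadata/v1/"
                 "instance/attributes/user-data")

def build_user_data_request() -> urllib.request.Request:
    # The metadata server rejects requests without this header.
    return urllib.request.Request(
        USER_DATA_URL, headers={"Metadata-Flavor": "Google"})

def config_digest(raw_config: bytes) -> str:
    """The "parsing config with SHA512: ..." value is one hashlib call
    over the raw config bytes."""
    return hashlib.sha512(raw_config).hexdigest()

req = build_user_data_request()
print(req.get_header("Metadata-flavor"))  # Google (urllib stores header keys capitalized)
print(len(config_digest(b'{"ignition": {"version": "3.3.0"}}')))  # 128
```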
Apr 30 03:28:43.115444 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:43.116411 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:43.117298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:43.147175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:43.157167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:43.182781 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:28:43.240236 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Apr 30 03:28:43.240286 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:43.240312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:43.240336 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:43.182892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:28:43.271292 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:43.271343 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:43.182934 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:43.251651 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:43.281282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:43.304340 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 03:28:43.431566 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:43.442617 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:43.454105 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:43.465175 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:43.468426 systemd-networkd[752]: eth0: Gained IPv6LL Apr 30 03:28:43.604983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:43.610194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:43.635307 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:43.659364 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:43.668578 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:43.718568 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:43.728273 ignition[900]: INFO : Ignition 2.19.0 Apr 30 03:28:43.728273 ignition[900]: INFO : Stage: mount Apr 30 03:28:43.728273 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:43.728273 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:43.728273 ignition[900]: INFO : mount: mount passed Apr 30 03:28:43.728273 ignition[900]: INFO : Ignition finished successfully Apr 30 03:28:43.738730 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:43.752214 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:44.123329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 03:28:44.170087 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Apr 30 03:28:44.188278 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:44.188373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:44.188400 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:44.211020 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:44.211114 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:44.214718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:44.257341 ignition[930]: INFO : Ignition 2.19.0 Apr 30 03:28:44.257341 ignition[930]: INFO : Stage: files Apr 30 03:28:44.273213 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:44.273213 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:44.273213 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:44.273213 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:44.271160 unknown[930]: wrote ssh authorized keys file for user: core Apr 30 03:28:44.376256 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:44.376256 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:44.410242 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:44.713966 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:28:44.966390 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:45.513248 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:45.513248 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:45.553234 
ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:45.517784 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:28:45.549366 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:28:45.553234 ignition[930]: INFO : files: files passed
Apr 30 03:28:45.553234 ignition[930]: INFO : Ignition finished successfully
Apr 30 03:28:45.554412 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:28:45.673651 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:28:45.673785 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:28:45.693659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:45.713526 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:28:45.739295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:28:45.764224 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:45.764224 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:45.802287 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:45.819532 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:28:45.819649 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:28:45.839105 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:45.858349 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:28:45.879418 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:45.886275 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:45.952266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:45.972279 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:46.019515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:46.039397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:46.039879 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:46.067419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:46.067646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:46.094519 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:46.115481 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:46.133414 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:46.151476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:46.172402 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:46.193564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:46.213406 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:46.234483 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:46.255484 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:46.275394 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:46.293432 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 30 03:28:46.293642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:46.318493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:46.338409 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:46.360427 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:46.360635 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:46.382378 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:46.382752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:46.413450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:46.413723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:46.433514 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:46.433717 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:46.501238 ignition[982]: INFO : Ignition 2.19.0 Apr 30 03:28:46.501238 ignition[982]: INFO : Stage: umount Apr 30 03:28:46.501238 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:46.501238 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:46.501238 ignition[982]: INFO : umount: umount passed Apr 30 03:28:46.501238 ignition[982]: INFO : Ignition finished successfully Apr 30 03:28:46.460314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:46.497420 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:46.509202 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:46.509580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:46.533631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 30 03:28:46.533859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:46.575955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:46.576958 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:46.577098 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:46.591941 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:46.592077 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:46.613936 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:46.614071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:46.631965 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:46.632081 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:46.650402 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:46.650483 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:46.660501 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:46.660580 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:46.677451 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:46.702382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:46.702599 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:46.710542 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:46.737245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:46.741224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:46.756246 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:46.756402 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 30 03:28:46.782307 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:46.782413 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:46.801300 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:46.801388 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:46.809441 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:46.809526 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:46.827524 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:46.827615 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:46.844485 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:46.844562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:46.861791 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:46.867168 systemd-networkd[752]: eth0: DHCPv6 lease lost Apr 30 03:28:46.888450 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:46.915714 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:46.915878 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:46.935078 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:46.935358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:46.954165 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:46.954228 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:46.969192 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:46.998190 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:46.998416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 03:28:47.008719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:47.008800 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:47.041468 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:47.041562 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:47.059391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:47.059477 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:47.080550 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:47.108809 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:47.529213 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:47.108990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:47.135423 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:47.135528 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:47.146507 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:47.146563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:47.173389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:47.173483 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:47.203552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:47.203637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:47.240315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:47.240426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 03:28:47.275316 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:47.287366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:47.287453 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:47.315420 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:47.315536 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:47.336401 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:47.336511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:47.355400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:47.355491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:47.384017 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:47.384217 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:47.393765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:47.393885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:47.415021 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:47.438370 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:47.482584 systemd[1]: Switching root. 
Apr 30 03:28:47.801203 systemd-journald[183]: Journal stopped Apr 30 03:28:39.111252 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:28:39.111298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.111316 kernel: BIOS-provided physical RAM map: Apr 30 03:28:39.111330 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Apr 30 03:28:39.111343 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Apr 30 03:28:39.111357 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Apr 30 03:28:39.111373 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Apr 30 03:28:39.111391 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Apr 30 03:28:39.111406 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Apr 30 03:28:39.111420 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Apr 30 03:28:39.111434 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Apr 30 03:28:39.111449 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Apr 30 03:28:39.111464 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Apr 30 03:28:39.111478 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Apr 30 03:28:39.111499 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Apr 30 03:28:39.111515 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Apr 30 
03:28:39.111531 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Apr 30 03:28:39.111547 kernel: NX (Execute Disable) protection: active Apr 30 03:28:39.111562 kernel: APIC: Static calls initialized Apr 30 03:28:39.111578 kernel: efi: EFI v2.7 by EDK II Apr 30 03:28:39.111594 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Apr 30 03:28:39.111610 kernel: SMBIOS 2.4 present. Apr 30 03:28:39.111627 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 Apr 30 03:28:39.111643 kernel: Hypervisor detected: KVM Apr 30 03:28:39.111664 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:28:39.111681 kernel: kvm-clock: using sched offset of 12236025730 cycles Apr 30 03:28:39.111699 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:28:39.111716 kernel: tsc: Detected 2299.998 MHz processor Apr 30 03:28:39.111733 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:28:39.111751 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:28:39.111767 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Apr 30 03:28:39.111784 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Apr 30 03:28:39.111801 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:28:39.111822 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Apr 30 03:28:39.111845 kernel: Using GB pages for direct mapping Apr 30 03:28:39.111862 kernel: Secure boot disabled Apr 30 03:28:39.111879 kernel: ACPI: Early table checksum verification disabled Apr 30 03:28:39.111895 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Apr 30 03:28:39.111912 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Apr 30 03:28:39.111929 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Apr 30 
03:28:39.111954 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 30 03:28:39.111974 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 30 03:28:39.111991 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Apr 30 03:28:39.112008 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 30 03:28:39.112040 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 30 03:28:39.112069 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 30 03:28:39.112088 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 30 03:28:39.112111 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 30 03:28:39.112129 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 30 03:28:39.112145 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 30 03:28:39.112218 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 30 03:28:39.112235 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 30 03:28:39.112252 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 30 03:28:39.112268 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 30 03:28:39.112284 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 30 03:28:39.112300 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 30 03:28:39.112321 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 30 03:28:39.112338 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:28:39.112354 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:28:39.112371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 30 03:28:39.112388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Apr 30 03:28:39.112404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 30 03:28:39.112421 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 30 03:28:39.112541 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 30 03:28:39.112562 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Apr 30 03:28:39.112594 kernel: Zone ranges: Apr 30 03:28:39.112614 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:28:39.112633 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 03:28:39.112652 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 30 03:28:39.112671 kernel: Movable zone start for each node Apr 30 03:28:39.112689 kernel: Early memory node ranges Apr 30 03:28:39.112708 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 30 03:28:39.112727 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 30 03:28:39.112746 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 30 03:28:39.112770 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 30 03:28:39.112790 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 30 03:28:39.112809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 30 03:28:39.112829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:28:39.112847 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 30 03:28:39.112867 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 30 03:28:39.112887 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 30 03:28:39.112907 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 30 03:28:39.112926 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 30 03:28:39.112951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:28:39.112970 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Apr 30 03:28:39.112990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:28:39.113009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:28:39.113047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:28:39.113066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:28:39.113085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:28:39.113105 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:28:39.113125 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 30 03:28:39.113151 kernel: Booting paravirtualized kernel on KVM Apr 30 03:28:39.113172 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:28:39.113190 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:28:39.113210 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:28:39.113229 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:28:39.113249 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:28:39.113268 kernel: kvm-guest: PV spinlocks enabled Apr 30 03:28:39.113288 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:28:39.113311 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.113338 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Apr 30 03:28:39.113358 kernel: random: crng init done Apr 30 03:28:39.113378 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 03:28:39.113399 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:28:39.113438 kernel: Fallback order for Node 0: 0 Apr 30 03:28:39.113456 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 30 03:28:39.113476 kernel: Policy zone: Normal Apr 30 03:28:39.113495 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:28:39.113521 kernel: software IO TLB: area num 2. Apr 30 03:28:39.113543 kernel: Memory: 7513388K/7860584K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 346936K reserved, 0K cma-reserved) Apr 30 03:28:39.113561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:28:39.113581 kernel: Kernel/User page tables isolation: enabled Apr 30 03:28:39.113599 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:28:39.113617 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:28:39.113637 kernel: Dynamic Preempt: voluntary Apr 30 03:28:39.113657 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:28:39.113678 kernel: rcu: RCU event tracing is enabled. Apr 30 03:28:39.113719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:28:39.113740 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:28:39.113761 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:28:39.113786 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:28:39.113806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:28:39.113825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:28:39.113844 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 03:28:39.113864 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 03:28:39.113883 kernel: Console: colour dummy device 80x25 Apr 30 03:28:39.113908 kernel: printk: console [ttyS0] enabled Apr 30 03:28:39.113928 kernel: ACPI: Core revision 20230628 Apr 30 03:28:39.113949 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:28:39.113970 kernel: x2apic enabled Apr 30 03:28:39.113990 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:28:39.114011 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 30 03:28:39.114050 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 30 03:28:39.114083 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 30 03:28:39.114108 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 30 03:28:39.114126 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 30 03:28:39.114145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:28:39.114164 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 03:28:39.114182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 03:28:39.114199 kernel: Spectre V2 : Mitigation: IBRS Apr 30 03:28:39.114218 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:28:39.114237 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:28:39.114258 kernel: RETBleed: Mitigation: IBRS Apr 30 03:28:39.114284 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 03:28:39.114301 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 30 03:28:39.114318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 03:28:39.114336 kernel: MDS: Mitigation: Clear CPU buffers Apr 30 03:28:39.114355 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 
30 03:28:39.114375 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:28:39.114397 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:28:39.114430 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:28:39.114450 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:28:39.114479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 30 03:28:39.114501 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:28:39.114522 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:28:39.114543 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:28:39.114565 kernel: landlock: Up and running. Apr 30 03:28:39.114585 kernel: SELinux: Initializing. Apr 30 03:28:39.114605 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.114626 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.114647 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 30 03:28:39.114673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114694 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:28:39.114736 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 30 03:28:39.114756 kernel: signal: max sigframe size: 1776 Apr 30 03:28:39.114777 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:28:39.114798 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:28:39.114819 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:28:39.114840 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 03:28:39.114866 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:28:39.114886 kernel: .... node #0, CPUs: #1 Apr 30 03:28:39.114908 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 30 03:28:39.114927 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 30 03:28:39.114945 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:28:39.114965 kernel: smpboot: Max logical packages: 1 Apr 30 03:28:39.114986 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 30 03:28:39.115006 kernel: devtmpfs: initialized Apr 30 03:28:39.115064 kernel: x86/mm: Memory block size: 128MB Apr 30 03:28:39.115085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 30 03:28:39.115106 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:28:39.115126 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:28:39.115146 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:28:39.115168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:28:39.115188 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:28:39.115208 kernel: audit: type=2000 audit(1745983718.119:1): state=initialized audit_enabled=0 res=1 Apr 30 03:28:39.115228 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:28:39.115256 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:28:39.115276 kernel: cpuidle: using governor menu Apr 30 03:28:39.115297 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:28:39.115316 kernel: dca service started, version 1.12.1 Apr 30 03:28:39.115337 kernel: PCI: Using configuration type 1 for base access Apr 30 
03:28:39.115357 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:28:39.115378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:28:39.115398 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:28:39.115426 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:28:39.115454 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:28:39.115473 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:28:39.115494 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:28:39.115514 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:28:39.115534 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:28:39.115554 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 30 03:28:39.115574 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:28:39.115595 kernel: ACPI: Interpreter enabled Apr 30 03:28:39.115615 kernel: ACPI: PM: (supports S0 S3 S5) Apr 30 03:28:39.115641 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:28:39.115660 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:28:39.115688 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 03:28:39.115707 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 30 03:28:39.115726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:28:39.116066 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:28:39.116333 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 30 03:28:39.116580 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 30 03:28:39.116616 kernel: PCI host bridge to bus 0000:00 Apr 30 03:28:39.116840 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Apr 30 03:28:39.117072 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:28:39.117284 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:28:39.117499 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 30 03:28:39.117706 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:28:39.117955 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 30 03:28:39.118242 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 30 03:28:39.118505 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 30 03:28:39.118741 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 30 03:28:39.118978 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 30 03:28:39.119249 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 30 03:28:39.119505 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 30 03:28:39.119761 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 30 03:28:39.119995 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 30 03:28:39.120285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 30 03:28:39.120537 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 03:28:39.120771 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 30 03:28:39.121005 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 30 03:28:39.121062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:28:39.121080 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:28:39.121097 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:28:39.121115 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:28:39.121133 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 30 03:28:39.121150 kernel: iommu: Default domain type: Translated Apr 30 03:28:39.121167 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Apr 30 03:28:39.121185 kernel: efivars: Registered efivars operations Apr 30 03:28:39.121205 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:28:39.121234 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:28:39.121255 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 30 03:28:39.121275 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 30 03:28:39.121295 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 30 03:28:39.121314 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 30 03:28:39.121331 kernel: vgaarb: loaded Apr 30 03:28:39.121350 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:28:39.121368 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:28:39.121387 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:28:39.121420 kernel: pnp: PnP ACPI init Apr 30 03:28:39.121440 kernel: pnp: PnP ACPI: found 7 devices Apr 30 03:28:39.121460 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:28:39.121479 kernel: NET: Registered PF_INET protocol family Apr 30 03:28:39.121498 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:28:39.121519 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 30 03:28:39.121536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:28:39.121558 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:28:39.121578 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:28:39.121603 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 30 03:28:39.121623 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.121644 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 03:28:39.121663 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:28:39.121683 kernel: NET: Registered PF_XDP protocol family Apr 30 03:28:39.121937 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:28:39.122189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:28:39.122427 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:28:39.122648 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 30 03:28:39.122885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 30 03:28:39.122912 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:28:39.122934 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:28:39.122955 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 30 03:28:39.122976 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 03:28:39.122994 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 30 03:28:39.123012 kernel: clocksource: Switched to clocksource tsc Apr 30 03:28:39.123111 kernel: Initialise system trusted keyrings Apr 30 03:28:39.123132 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 30 03:28:39.123154 kernel: Key type asymmetric registered Apr 30 03:28:39.123174 kernel: Asymmetric key parser 'x509' registered Apr 30 03:28:39.123194 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:28:39.123215 kernel: io scheduler mq-deadline registered Apr 30 03:28:39.123237 kernel: io scheduler kyber registered Apr 30 03:28:39.123258 kernel: io scheduler bfq registered Apr 30 03:28:39.123278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:28:39.123307 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 30 03:28:39.123577 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.123605 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Apr 30 03:28:39.123835 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.123861 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 30 03:28:39.124127 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 30 03:28:39.124155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:28:39.124174 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:28:39.124195 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:28:39.124224 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 30 03:28:39.124245 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 30 03:28:39.124497 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 30 03:28:39.124525 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:28:39.124547 kernel: i8042: Warning: Keylock active Apr 30 03:28:39.124568 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 03:28:39.124591 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:28:39.124820 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 30 03:28:39.125064 kernel: rtc_cmos 00:00: registered as rtc0 Apr 30 03:28:39.125271 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:28:38 UTC (1745983718) Apr 30 03:28:39.125492 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 30 03:28:39.125515 kernel: intel_pstate: CPU model not supported Apr 30 03:28:39.125534 kernel: pstore: Using crash dump compression: deflate Apr 30 03:28:39.125552 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 03:28:39.125570 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:28:39.125589 kernel: Segment Routing with IPv6 Apr 30 03:28:39.125617 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:28:39.125637 kernel: NET: Registered PF_PACKET protocol family Apr 30 
03:28:39.125656 kernel: Key type dns_resolver registered Apr 30 03:28:39.125676 kernel: IPI shorthand broadcast: enabled Apr 30 03:28:39.125695 kernel: sched_clock: Marking stable (871005149, 155135969)->(1054731156, -28590038) Apr 30 03:28:39.125716 kernel: registered taskstats version 1 Apr 30 03:28:39.125736 kernel: Loading compiled-in X.509 certificates Apr 30 03:28:39.125756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:28:39.125776 kernel: Key type .fscrypt registered Apr 30 03:28:39.125802 kernel: Key type fscrypt-provisioning registered Apr 30 03:28:39.125822 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:28:39.125843 kernel: ima: No architecture policies found Apr 30 03:28:39.125863 kernel: clk: Disabling unused clocks Apr 30 03:28:39.125883 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:28:39.125903 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:28:39.125923 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:28:39.125943 kernel: Run /init as init process Apr 30 03:28:39.125968 kernel: with arguments: Apr 30 03:28:39.125987 kernel: /init Apr 30 03:28:39.126005 kernel: with environment: Apr 30 03:28:39.126041 kernel: HOME=/ Apr 30 03:28:39.126073 kernel: TERM=linux Apr 30 03:28:39.126093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:28:39.126111 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 30 03:28:39.126132 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:39.126162 systemd[1]: Detected virtualization google. 
Apr 30 03:28:39.126182 systemd[1]: Detected architecture x86-64. Apr 30 03:28:39.126202 systemd[1]: Running in initrd. Apr 30 03:28:39.126222 systemd[1]: No hostname configured, using default hostname. Apr 30 03:28:39.126240 systemd[1]: Hostname set to . Apr 30 03:28:39.126258 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:39.126277 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:28:39.126298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:39.126325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:39.126348 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:28:39.126369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:39.126391 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:28:39.126422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:28:39.126446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:28:39.126468 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:28:39.126495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:39.126517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:39.126562 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:39.126590 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:39.126611 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:39.126633 systemd[1]: Reached target timers.target - Timer Units. 
Apr 30 03:28:39.126661 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:39.126683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:39.126705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:28:39.126727 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:28:39.126749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:39.126771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:39.126793 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:39.126815 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:39.126841 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:28:39.126863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:39.126885 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:28:39.126908 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:28:39.126930 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:39.126951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:39.127016 systemd-journald[183]: Collecting audit messages is disabled. Apr 30 03:28:39.127113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:39.127136 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:39.127158 systemd-journald[183]: Journal started Apr 30 03:28:39.127199 systemd-journald[183]: Runtime Journal (/run/log/journal/040ee8dba8ef4cbd925cc613074b560b) is 8.0M, max 148.7M, 140.7M free. Apr 30 03:28:39.137372 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 30 03:28:39.138198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:39.143358 systemd-modules-load[184]: Inserted module 'overlay' Apr 30 03:28:39.144202 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:28:39.158395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:39.162500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:39.169277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:39.178382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:39.195068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:28:39.200057 kernel: Bridge firewalling registered Apr 30 03:28:39.197695 systemd-modules-load[184]: Inserted module 'br_netfilter' Apr 30 03:28:39.199751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:39.213259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:39.215106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:39.226244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:39.226932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:39.242326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:39.252523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:39.253100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 03:28:39.263331 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:28:39.276970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:39.295709 dracut-cmdline[216]: dracut-dracut-053 Apr 30 03:28:39.300396 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:39.328821 systemd-resolved[217]: Positive Trust Anchors: Apr 30 03:28:39.329455 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:39.329525 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:39.336357 systemd-resolved[217]: Defaulting to hostname 'linux'. Apr 30 03:28:39.339664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:39.353445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:39.414072 kernel: SCSI subsystem initialized Apr 30 03:28:39.425083 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 03:28:39.438075 kernel: iscsi: registered transport (tcp) Apr 30 03:28:39.461423 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:28:39.461507 kernel: QLogic iSCSI HBA Driver Apr 30 03:28:39.515153 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:39.523258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:28:39.564278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:28:39.564363 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:28:39.564391 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:28:39.610101 kernel: raid6: avx2x4 gen() 17816 MB/s Apr 30 03:28:39.627084 kernel: raid6: avx2x2 gen() 18171 MB/s Apr 30 03:28:39.644617 kernel: raid6: avx2x1 gen() 13870 MB/s Apr 30 03:28:39.644675 kernel: raid6: using algorithm avx2x2 gen() 18171 MB/s Apr 30 03:28:39.662893 kernel: raid6: .... xor() 17778 MB/s, rmw enabled Apr 30 03:28:39.662944 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:28:39.686080 kernel: xor: automatically using best checksumming function avx Apr 30 03:28:39.860066 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:28:39.873997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:39.883292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:39.915065 systemd-udevd[400]: Using default interface naming scheme 'v255'. Apr 30 03:28:39.922741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:39.936342 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:28:39.969980 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Apr 30 03:28:40.008263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 03:28:40.023261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:40.103643 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:40.115532 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:28:40.157758 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:40.170402 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:40.175209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:40.179202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:40.187252 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:28:40.229804 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:40.238724 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:28:40.257194 kernel: scsi host0: Virtio SCSI HBA Apr 30 03:28:40.267419 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 30 03:28:40.327539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:40.332226 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:28:40.332273 kernel: AES CTR mode by8 optimization enabled Apr 30 03:28:40.329551 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:40.339076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:40.343384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:40.343629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:40.346860 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 03:28:40.375251 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Apr 30 03:28:40.396697 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 30 03:28:40.396960 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 30 03:28:40.397232 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 30 03:28:40.397482 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 03:28:40.397717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:28:40.397745 kernel: GPT:17805311 != 25165823 Apr 30 03:28:40.397769 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:28:40.397794 kernel: GPT:17805311 != 25165823 Apr 30 03:28:40.397818 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:28:40.397842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:40.397867 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 30 03:28:40.376479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:40.411537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:40.424276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:40.453976 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445) Apr 30 03:28:40.463093 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (452) Apr 30 03:28:40.488375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 30 03:28:40.498136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:40.506213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Apr 30 03:28:40.513467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Apr 30 03:28:40.519618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 30 03:28:40.519781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 30 03:28:40.534283 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:40.557927 disk-uuid[550]: Primary Header is updated. Apr 30 03:28:40.557927 disk-uuid[550]: Secondary Entries is updated. Apr 30 03:28:40.557927 disk-uuid[550]: Secondary Header is updated. Apr 30 03:28:40.571238 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:40.580068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:40.590322 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:41.605052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:28:41.605136 disk-uuid[551]: The operation has completed successfully. Apr 30 03:28:41.684168 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:41.684317 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:41.718277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:28:41.748046 sh[569]: Success Apr 30 03:28:41.772235 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:28:41.864061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:28:41.871894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:41.895645 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 03:28:41.944536 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:41.944625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:41.944650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:41.953986 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:41.966569 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:41.996160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:28:42.000153 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:42.001128 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:42.006273 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:42.069262 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:42.069317 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:42.069468 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:42.036291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:28:42.120244 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:42.120296 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:42.120322 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:42.111389 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:28:42.126948 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:42.156385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 30 03:28:42.252119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:42.261868 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:42.357906 systemd-networkd[752]: lo: Link UP Apr 30 03:28:42.358513 systemd-networkd[752]: lo: Gained carrier Apr 30 03:28:42.368061 ignition[667]: Ignition 2.19.0 Apr 30 03:28:42.361111 systemd-networkd[752]: Enumeration completed Apr 30 03:28:42.368073 ignition[667]: Stage: fetch-offline Apr 30 03:28:42.361828 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:42.368145 ignition[667]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:42.361835 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:42.368163 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:42.362221 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:42.368357 ignition[667]: parsed url from cmdline: "" Apr 30 03:28:42.364891 systemd-networkd[752]: eth0: Link UP Apr 30 03:28:42.368364 ignition[667]: no config URL provided Apr 30 03:28:42.364900 systemd-networkd[752]: eth0: Gained carrier Apr 30 03:28:42.368374 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:42.364918 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:42.368390 ignition[667]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:42.374150 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 30 03:28:42.368402 ignition[667]: failed to fetch config: resource requires networking Apr 30 03:28:42.383624 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 03:28:42.368711 ignition[667]: Ignition finished successfully Apr 30 03:28:42.401079 systemd[1]: Reached target network.target - Network. Apr 30 03:28:42.442992 ignition[760]: Ignition 2.19.0 Apr 30 03:28:42.413312 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 03:28:42.443002 ignition[760]: Stage: fetch Apr 30 03:28:42.454168 unknown[760]: fetched base config from "system" Apr 30 03:28:42.443246 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:42.454184 unknown[760]: fetched base config from "system" Apr 30 03:28:42.443259 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:42.454195 unknown[760]: fetched user config from "gcp" Apr 30 03:28:42.443383 ignition[760]: parsed url from cmdline: "" Apr 30 03:28:42.457619 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:28:42.443390 ignition[760]: no config URL provided Apr 30 03:28:42.475304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:28:42.443399 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:42.517840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:28:42.443411 ignition[760]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:42.550270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:28:42.443433 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 30 03:28:42.603596 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:28:42.447984 ignition[760]: GET result: OK Apr 30 03:28:42.623442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 30 03:28:42.448102 ignition[760]: parsing config with SHA512: 57ecac66891f3fbfe06d9aee624c46b24a2d2f1db16bd851ef26f625c175f515f1ba40a31ff80881f260d74ad93a3e0df09a295287af0ab48ec2c6fb5a0bacb2 Apr 30 03:28:42.641391 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:42.454795 ignition[760]: fetch: fetch complete Apr 30 03:28:42.660259 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:42.454805 ignition[760]: fetch: fetch passed Apr 30 03:28:42.676389 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:42.454876 ignition[760]: Ignition finished successfully Apr 30 03:28:42.686450 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:42.508512 ignition[765]: Ignition 2.19.0 Apr 30 03:28:42.721330 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:28:42.508523 ignition[765]: Stage: kargs Apr 30 03:28:42.508716 ignition[765]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:42.508728 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:42.509723 ignition[765]: kargs: kargs passed Apr 30 03:28:42.509782 ignition[765]: Ignition finished successfully Apr 30 03:28:42.601223 ignition[772]: Ignition 2.19.0 Apr 30 03:28:42.601233 ignition[772]: Stage: disks Apr 30 03:28:42.601453 ignition[772]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:42.601466 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:42.602431 ignition[772]: disks: disks passed Apr 30 03:28:42.602491 ignition[772]: Ignition finished successfully Apr 30 03:28:42.773478 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 03:28:42.960186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:28:42.965211 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 30 03:28:43.115444 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:43.116411 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:43.117298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:43.147175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:43.157167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:43.182781 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:28:43.240236 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Apr 30 03:28:43.240286 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:43.240312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:43.240336 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:43.182892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:28:43.271292 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:43.271343 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:43.182934 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:43.251651 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:43.281282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:43.304340 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 03:28:43.431566 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:43.442617 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:43.454105 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:43.465175 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:43.468426 systemd-networkd[752]: eth0: Gained IPv6LL Apr 30 03:28:43.604983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:43.610194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:43.635307 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:43.659364 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:43.668578 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:43.718568 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:43.728273 ignition[900]: INFO : Ignition 2.19.0 Apr 30 03:28:43.728273 ignition[900]: INFO : Stage: mount Apr 30 03:28:43.728273 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:43.728273 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:43.728273 ignition[900]: INFO : mount: mount passed Apr 30 03:28:43.728273 ignition[900]: INFO : Ignition finished successfully Apr 30 03:28:43.738730 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:43.752214 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:44.123329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 03:28:44.170087 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Apr 30 03:28:44.188278 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:44.188373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:44.188400 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:28:44.211020 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:28:44.211114 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:28:44.214718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:44.257341 ignition[930]: INFO : Ignition 2.19.0 Apr 30 03:28:44.257341 ignition[930]: INFO : Stage: files Apr 30 03:28:44.273213 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:44.273213 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:44.273213 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:44.273213 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:44.273213 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:44.271160 unknown[930]: wrote ssh authorized keys file for user: core Apr 30 03:28:44.376256 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:44.376256 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:44.410242 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:44.713966 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:44.731250 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:28:44.966390 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:45.513248 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:45.513248 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:45.553234 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:45.553234 
ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:45.553234 ignition[930]: INFO : files: files passed Apr 30 03:28:45.553234 ignition[930]: INFO : Ignition finished successfully Apr 30 03:28:45.517784 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:45.549366 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:28:45.554412 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:45.673651 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:28:45.764224 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:45.764224 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:45.673785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:45.802287 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:45.693659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:45.713526 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:45.739295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:45.819532 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:45.819649 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:28:45.839105 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:45.858349 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 30 03:28:45.879418 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:45.886275 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:45.952266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:45.972279 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:46.019515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:46.039397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:46.039879 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:46.067419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:46.067646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:46.094519 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:46.115481 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:46.133414 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:46.151476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:46.172402 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:46.193564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:46.213406 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:46.234483 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:46.255484 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:46.275394 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:46.293432 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 30 03:28:46.293642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:46.318493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:46.338409 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:46.360427 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:46.360635 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:46.382378 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:46.382752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:46.413450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:46.413723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:46.433514 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:46.433717 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:46.501238 ignition[982]: INFO : Ignition 2.19.0 Apr 30 03:28:46.501238 ignition[982]: INFO : Stage: umount Apr 30 03:28:46.501238 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:46.501238 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 30 03:28:46.501238 ignition[982]: INFO : umount: umount passed Apr 30 03:28:46.501238 ignition[982]: INFO : Ignition finished successfully Apr 30 03:28:46.460314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:46.497420 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:46.509202 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:46.509580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:46.533631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 30 03:28:46.533859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:46.575955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:46.576958 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:46.577098 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:46.591941 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:46.592077 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:46.613936 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:46.614071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:46.631965 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:46.632081 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:46.650402 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:46.650483 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:46.660501 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:28:46.660580 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:28:46.677451 systemd[1]: Stopped target network.target - Network. Apr 30 03:28:46.702382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:46.702599 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:46.710542 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:46.737245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:46.741224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:46.756246 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:46.756402 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 30 03:28:46.782307 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:46.782413 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:46.801300 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:46.801388 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:46.809441 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:46.809526 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:46.827524 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:46.827615 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:46.844485 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:46.844562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:46.861791 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:46.867168 systemd-networkd[752]: eth0: DHCPv6 lease lost Apr 30 03:28:46.888450 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:46.915714 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:46.915878 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:46.935078 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:46.935358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:46.954165 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:46.954228 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:46.969192 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:46.998190 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:46.998416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 03:28:47.008719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:47.008800 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:47.041468 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:47.041562 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:47.059391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:47.059477 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:47.080550 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:47.108809 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:47.529213 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:47.108990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:47.135423 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:47.135528 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:47.146507 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:47.146563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:47.173389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:47.173483 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:47.203552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:47.203637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:47.240315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:47.240426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 03:28:47.275316 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:47.287366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:47.287453 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:47.315420 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:28:47.315536 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:47.336401 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:47.336511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:47.355400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:47.355491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:47.384017 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:47.384217 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:47.393765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:47.393885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:47.415021 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:47.438370 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:47.482584 systemd[1]: Switching root. 
Apr 30 03:28:47.801203 systemd-journald[183]: Journal stopped Apr 30 03:28:50.213627 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:28:50.213694 kernel: SELinux: policy capability open_perms=1 Apr 30 03:28:50.213718 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:28:50.213737 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:28:50.213755 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:28:50.213774 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:28:50.213794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:28:50.213818 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:28:50.213837 kernel: audit: type=1403 audit(1745983728.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:28:50.213861 systemd[1]: Successfully loaded SELinux policy in 91.850ms. Apr 30 03:28:50.213884 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.916ms. Apr 30 03:28:50.213910 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:50.213931 systemd[1]: Detected virtualization google. Apr 30 03:28:50.213952 systemd[1]: Detected architecture x86-64. Apr 30 03:28:50.213980 systemd[1]: Detected first boot. Apr 30 03:28:50.214003 systemd[1]: Initializing machine ID from random generator. Apr 30 03:28:50.214041 zram_generator::config[1023]: No configuration found. Apr 30 03:28:50.214064 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:28:50.214085 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:28:50.214112 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 30 03:28:50.214135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:50.214158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:28:50.214181 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:28:50.214203 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:28:50.214227 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:28:50.214248 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:28:50.214277 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:28:50.214300 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:28:50.214321 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:28:50.214343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:50.214366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:50.214388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:28:50.214412 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:28:50.214434 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:28:50.214470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:50.214493 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:28:50.214515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:50.214537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 30 03:28:50.214560 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:28:50.214583 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:50.214613 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:28:50.214635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:50.214658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:50.214687 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:50.214710 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:50.214732 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:28:50.214753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:28:50.214775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:50.214798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:50.214820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:50.214848 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:28:50.214872 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:28:50.214895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:28:50.214922 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:28:50.214945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:50.214974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:28:50.214998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:28:50.215022 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 30 03:28:50.215071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:28:50.215094 systemd[1]: Reached target machines.target - Containers. Apr 30 03:28:50.215117 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:28:50.215140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:50.215164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:50.215190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:28:50.215210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:50.215231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:50.215253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:50.215274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:28:50.215295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:50.215318 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:28:50.215340 kernel: fuse: init (API version 7.39) Apr 30 03:28:50.215364 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:28:50.215386 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:28:50.215407 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:28:50.215432 systemd[1]: Stopped systemd-fsck-usr.service. 
Apr 30 03:28:50.215462 kernel: ACPI: bus type drm_connector registered Apr 30 03:28:50.215482 kernel: loop: module loaded Apr 30 03:28:50.215503 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:50.215527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:50.215599 systemd-journald[1110]: Collecting audit messages is disabled. Apr 30 03:28:50.215653 systemd-journald[1110]: Journal started Apr 30 03:28:50.215700 systemd-journald[1110]: Runtime Journal (/run/log/journal/d222bcf2f0e046b1ac83ee24f26bf87d) is 8.0M, max 148.7M, 140.7M free. Apr 30 03:28:49.016620 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:28:49.040239 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 03:28:49.040824 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:28:50.238076 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:28:50.263064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:28:50.275088 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:50.306302 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:28:50.306415 systemd[1]: Stopped verity-setup.service. Apr 30 03:28:50.331100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:50.341121 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:50.351660 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:28:50.361505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:28:50.371449 systemd[1]: Mounted media.mount - External Media Directory. 
Apr 30 03:28:50.382456 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:28:50.392435 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:28:50.402412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:50.412606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:50.424618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:50.436702 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:50.436968 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:50.448682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:50.448919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:50.460658 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:50.460925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:50.471644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:50.471885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:50.483663 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:50.483903 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:50.494640 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:50.494881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:50.505612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:50.515632 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:50.527641 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Apr 30 03:28:50.539633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:50.565258 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:28:50.582261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:50.605263 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:50.615268 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:50.615565 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:50.626609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:50.643336 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:28:50.665363 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:50.675459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:50.685742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:28:50.703668 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:50.716244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:50.734459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:50.741942 systemd-journald[1110]: Time spent on flushing to /var/log/journal/d222bcf2f0e046b1ac83ee24f26bf87d is 53.918ms for 928 entries. Apr 30 03:28:50.741942 systemd-journald[1110]: System Journal (/var/log/journal/d222bcf2f0e046b1ac83ee24f26bf87d) is 8.0M, max 584.8M, 576.8M free. 
Apr 30 03:28:50.855363 systemd-journald[1110]: Received client request to flush runtime journal.
Apr 30 03:28:50.750846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:28:50.761657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:50.776395 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:28:50.795783 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:50.815305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:28:50.831368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:28:50.848067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:28:50.859906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:28:50.873419 kernel: loop0: detected capacity change from 0 to 142488
Apr 30 03:28:50.877667 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:28:50.890487 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:28:50.902703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:50.928160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:28:50.946755 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:28:50.952168 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Apr 30 03:28:50.952332 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Apr 30 03:28:50.970783 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:28:50.959219 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:28:50.970600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:50.995240 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:28:51.008378 kernel: loop1: detected capacity change from 0 to 140768
Apr 30 03:28:51.023570 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:28:51.032064 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:28:51.100066 kernel: loop2: detected capacity change from 0 to 54824
Apr 30 03:28:51.104343 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:28:51.147709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:51.200520 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Apr 30 03:28:51.200557 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Apr 30 03:28:51.211183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:51.226057 kernel: loop3: detected capacity change from 0 to 210664
Apr 30 03:28:51.328143 kernel: loop4: detected capacity change from 0 to 142488
Apr 30 03:28:51.380281 kernel: loop5: detected capacity change from 0 to 140768
Apr 30 03:28:51.443080 kernel: loop6: detected capacity change from 0 to 54824
Apr 30 03:28:51.492073 kernel: loop7: detected capacity change from 0 to 210664
Apr 30 03:28:51.546138 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Apr 30 03:28:51.547564 (sd-merge)[1169]: Merged extensions into '/usr'.
Apr 30 03:28:51.554780 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:28:51.555252 systemd[1]: Reloading...
Apr 30 03:28:51.711378 zram_generator::config[1192]: No configuration found.
Apr 30 03:28:51.904068 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:28:51.988238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:28:52.101343 systemd[1]: Reloading finished in 545 ms.
Apr 30 03:28:52.137414 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:28:52.148140 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:28:52.168841 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:28:52.194699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:52.212382 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:28:52.212602 systemd[1]: Reloading...
Apr 30 03:28:52.265631 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:28:52.266354 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:28:52.269041 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:28:52.270690 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Apr 30 03:28:52.270828 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Apr 30 03:28:52.278449 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:28:52.278471 systemd-tmpfiles[1236]: Skipping /boot
Apr 30 03:28:52.300712 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:28:52.300736 systemd-tmpfiles[1236]: Skipping /boot
Apr 30 03:28:52.343065 zram_generator::config[1260]: No configuration found.
Apr 30 03:28:52.498470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:28:52.563704 systemd[1]: Reloading finished in 350 ms.
Apr 30 03:28:52.584263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:28:52.602818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:52.626524 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:52.649534 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:28:52.672456 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:28:52.696617 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:52.715484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:52.738206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:28:52.742099 augenrules[1324]: No rules
Apr 30 03:28:52.748889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:52.773277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:52.773749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:52.786194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:52.796724 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Apr 30 03:28:52.807322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:52.812596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:52.818208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:52.827170 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:28:52.839554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:52.847144 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:28:52.859213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:28:52.872701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:52.887104 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:28:52.900078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:28:52.900391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:28:52.911976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:28:52.912251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:28:52.924001 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:28:52.924899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:28:52.935788 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:28:53.001226 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:28:53.015413 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:53.017336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:53.024311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:53.040281 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:28:53.061312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:53.076294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:53.096438 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 30 03:28:53.105362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:53.116419 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:53.126535 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:28:53.142309 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:28:53.152198 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:28:53.152486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:53.155986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:28:53.156354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:28:53.167691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:28:53.167950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:28:53.179323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:28:53.179563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:28:53.191860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:28:53.193120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:28:53.221214 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:28:53.229253 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 30 03:28:53.234014 systemd-resolved[1320]: Positive Trust Anchors:
Apr 30 03:28:53.236165 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:53.236248 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:53.239320 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:28:53.254077 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 30 03:28:53.271308 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 30 03:28:53.257774 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:28:53.263900 systemd-resolved[1320]: Defaulting to hostname 'linux'.
Apr 30 03:28:53.269934 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:53.280897 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 30 03:28:53.301097 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 30 03:28:53.301226 kernel: ACPI: button: Sleep Button [SLPF]
Apr 30 03:28:53.309874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:53.333334 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Apr 30 03:28:53.343233 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:28:53.343369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:28:53.418519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:53.446952 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Apr 30 03:28:53.452966 systemd-networkd[1374]: lo: Link UP
Apr 30 03:28:53.453964 systemd-networkd[1374]: lo: Gained carrier
Apr 30 03:28:53.466404 systemd-networkd[1374]: Enumeration completed
Apr 30 03:28:53.467448 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:53.469272 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:28:53.469468 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:53.469481 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:28:53.472725 systemd-networkd[1374]: eth0: Link UP
Apr 30 03:28:53.472740 systemd-networkd[1374]: eth0: Gained carrier
Apr 30 03:28:53.472776 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:28:53.479580 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:53.486076 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:28:53.490147 systemd-networkd[1374]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 30 03:28:53.500329 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:28:53.569063 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1356)
Apr 30 03:28:53.633656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 30 03:28:53.645748 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:28:53.657680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:53.675460 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:28:53.692832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:28:53.709408 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:28:53.735459 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:28:53.747849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:28:53.760349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:53.770276 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:53.780388 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:28:53.792336 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:28:53.803470 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:28:53.813439 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:28:53.825275 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:28:53.836250 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:28:53.836319 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:53.845263 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:53.855825 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:28:53.867964 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:28:53.881136 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:28:53.896343 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:28:53.918335 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:28:53.924737 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:28:53.929585 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:53.940340 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:53.949361 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:28:53.949413 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:28:53.955253 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:28:53.978808 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 03:28:54.000372 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:28:54.021164 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:28:54.045425 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:28:54.055199 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:28:54.066119 jq[1425]: false
Apr 30 03:28:54.066315 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:28:54.085451 systemd[1]: Started ntpd.service - Network Time Service.
Apr 30 03:28:54.098419 coreos-metadata[1423]: Apr 30 03:28:54.098 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Apr 30 03:28:54.101196 coreos-metadata[1423]: Apr 30 03:28:54.101 INFO Fetch successful
Apr 30 03:28:54.101196 coreos-metadata[1423]: Apr 30 03:28:54.101 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Apr 30 03:28:54.103378 coreos-metadata[1423]: Apr 30 03:28:54.103 INFO Fetch successful
Apr 30 03:28:54.104652 coreos-metadata[1423]: Apr 30 03:28:54.104 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Apr 30 03:28:54.104682 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:28:54.106130 coreos-metadata[1423]: Apr 30 03:28:54.106 INFO Fetch successful
Apr 30 03:28:54.106424 coreos-metadata[1423]: Apr 30 03:28:54.106 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Apr 30 03:28:54.111637 coreos-metadata[1423]: Apr 30 03:28:54.110 INFO Fetch successful
Apr 30 03:28:54.126186 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:28:54.143321 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:28:54.151764 extend-filesystems[1427]: Found loop4
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found loop5
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found loop6
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found loop7
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda1
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda2
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda3
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found usr
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda4
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda6
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda7
Apr 30 03:28:54.162977 extend-filesystems[1427]: Found sda9
Apr 30 03:28:54.162977 extend-filesystems[1427]: Checking size of /dev/sda9
Apr 30 03:28:54.355622 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Apr 30 03:28:54.355688 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Apr 30 03:28:54.159538 dbus-daemon[1424]: [system] SELinux support is enabled
Apr 30 03:28:54.356601 extend-filesystems[1427]: Resized partition /dev/sda9
Apr 30 03:28:54.405172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1356)
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: ----------------------------------------------------
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: ntp-4 is maintained by Network Time Foundation,
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: corporation. Support and training for ntp-4 are
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: available at https://www.nwtime.org/support
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: ----------------------------------------------------
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: proto: precision = 0.087 usec (-23)
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: basedate set to 2025-04-17
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: gps base set to 2025-04-20 (week 2363)
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listen normally on 3 eth0 10.128.0.99:123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listen normally on 4 lo [::1]:123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: bind(21) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:63%2#123
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: failed to init interface for address fe80::4001:aff:fe80:63%2
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: Listening on routing socket on fd #21 for interface updates
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 03:28:54.405250 ntpd[1431]: 30 Apr 03:28:54 ntpd[1431]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 03:28:54.167906 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:28:54.164494 dbus-daemon[1424]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1374 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 30 03:28:54.407957 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Apr 30 03:28:54.407957 extend-filesystems[1453]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 30 03:28:54.407957 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 30 03:28:54.407957 extend-filesystems[1453]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Apr 30 03:28:54.181861 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Apr 30 03:28:54.179118 ntpd[1431]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting
Apr 30 03:28:54.471087 extend-filesystems[1427]: Resized filesystem in /dev/sda9
Apr 30 03:28:54.182712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:28:54.179154 ntpd[1431]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 03:28:54.189546 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:28:54.179169 ntpd[1431]: ----------------------------------------------------
Apr 30 03:28:54.480955 jq[1452]: true
Apr 30 03:28:54.207238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:28:54.179184 ntpd[1431]: ntp-4 is maintained by Network Time Foundation,
Apr 30 03:28:54.481610 update_engine[1447]: I20250430 03:28:54.334573 1447 main.cc:92] Flatcar Update Engine starting
Apr 30 03:28:54.481610 update_engine[1447]: I20250430 03:28:54.340880 1447 update_check_scheduler.cc:74] Next update check in 2m28s
Apr 30 03:28:54.236749 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:28:54.179198 ntpd[1431]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 03:28:54.268867 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:28:54.179212 ntpd[1431]: corporation. Support and training for ntp-4 are
Apr 30 03:28:54.291680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:28:54.179228 ntpd[1431]: available at https://www.nwtime.org/support
Apr 30 03:28:54.291978 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:28:54.179241 ntpd[1431]: ----------------------------------------------------
Apr 30 03:28:54.292538 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:28:54.181535 ntpd[1431]: proto: precision = 0.087 usec (-23)
Apr 30 03:28:54.292746 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:28:54.181968 ntpd[1431]: basedate set to 2025-04-17
Apr 30 03:28:54.316676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:28:54.181989 ntpd[1431]: gps base set to 2025-04-20 (week 2363)
Apr 30 03:28:54.317841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:28:54.188451 ntpd[1431]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 03:28:54.369470 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 03:28:54.188529 ntpd[1431]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 03:28:54.369502 systemd-logind[1443]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 30 03:28:54.188852 ntpd[1431]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 03:28:54.369533 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 03:28:54.188921 ntpd[1431]: Listen normally on 3 eth0 10.128.0.99:123
Apr 30 03:28:54.377301 systemd-logind[1443]: New seat seat0.
Apr 30 03:28:54.188983 ntpd[1431]: Listen normally on 4 lo [::1]:123
Apr 30 03:28:54.381594 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 03:28:54.189069 ntpd[1431]: bind(21) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 03:28:54.391625 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 03:28:54.189099 ntpd[1431]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:63%2#123
Apr 30 03:28:54.393115 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 03:28:54.189118 ntpd[1431]: failed to init interface for address fe80::4001:aff:fe80:63%2
Apr 30 03:28:54.472677 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 03:28:54.192396 ntpd[1431]: Listening on routing socket on fd #21 for interface updates
Apr 30 03:28:54.479428 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 03:28:54.197229 ntpd[1431]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 03:28:54.197269 ntpd[1431]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 03:28:54.458712 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 03:28:54.520340 jq[1461]: true
Apr 30 03:28:54.543858 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 03:28:54.544452 tar[1459]: linux-amd64/helm
Apr 30 03:28:54.572825 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:28:54.584306 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 03:28:54.584596 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 03:28:54.584829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 03:28:54.607450 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 30 03:28:54.618243 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 03:28:54.618526 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 03:28:54.656471 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 03:28:54.674543 systemd-networkd[1374]: eth0: Gained IPv6LL
Apr 30 03:28:54.683984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 03:28:54.699819 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 03:28:54.716425 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:28:54.719367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:54.737479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 03:28:54.753451 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Apr 30 03:28:54.763164 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 03:28:54.786722 systemd[1]: Starting sshkeys.service...
Apr 30 03:28:54.845790 init.sh[1499]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Apr 30 03:28:54.845790 init.sh[1499]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Apr 30 03:28:54.845790 init.sh[1499]: + /usr/bin/google_instance_setup
Apr 30 03:28:54.917917 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 03:28:54.941569 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 03:28:54.961643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 03:28:54.959775 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 30 03:28:54.960350 dbus-daemon[1424]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1484 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 30 03:28:54.974315 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 30 03:28:55.003599 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 30 03:28:55.227234 polkitd[1515]: Started polkitd version 121 Apr 30 03:28:55.252510 polkitd[1515]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:28:55.267450 polkitd[1515]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:28:55.277317 coreos-metadata[1509]: Apr 30 03:28:55.277 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 30 03:28:55.281373 polkitd[1515]: Finished loading, compiling and executing 2 rules Apr 30 03:28:55.281889 coreos-metadata[1509]: Apr 30 03:28:55.281 INFO Fetch failed with 404: resource not found Apr 30 03:28:55.281889 coreos-metadata[1509]: Apr 30 03:28:55.281 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 30 03:28:55.284359 coreos-metadata[1509]: Apr 30 03:28:55.284 INFO Fetch successful Apr 30 03:28:55.284359 coreos-metadata[1509]: Apr 30 03:28:55.284 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 30 03:28:55.286017 coreos-metadata[1509]: Apr 30 03:28:55.285 INFO Fetch failed with 404: resource not found Apr 30 03:28:55.286017 coreos-metadata[1509]: Apr 30 03:28:55.285 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 30 03:28:55.286432 coreos-metadata[1509]: Apr 30 03:28:55.286 INFO Fetch failed with 404: resource not found Apr 30 03:28:55.286432 coreos-metadata[1509]: Apr 30 03:28:55.286 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 30 03:28:55.287651 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:28:55.287925 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 30 03:28:55.289053 coreos-metadata[1509]: Apr 30 03:28:55.288 INFO Fetch successful Apr 30 03:28:55.294409 polkitd[1515]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:28:55.298018 unknown[1509]: wrote ssh authorized keys file for user: core Apr 30 03:28:55.354487 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:55.399293 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:55.398952 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:55.401706 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:28:55.407803 systemd-hostnamed[1484]: Hostname set to (transient) Apr 30 03:28:55.412423 systemd-resolved[1320]: System hostname changed to 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal'. Apr 30 03:28:55.419713 systemd[1]: Finished sshkeys.service. Apr 30 03:28:55.509720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:55.530280 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:55.551164 systemd[1]: Started sshd@0-10.128.0.99:22-139.178.68.195:59670.service - OpenSSH per-connection server daemon (139.178.68.195:59670). Apr 30 03:28:55.569065 containerd[1462]: time="2025-04-30T03:28:55.567686834Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:55.595021 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:55.595341 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:55.616495 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:55.668399 containerd[1462]: time="2025-04-30T03:28:55.668303355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:28:55.671298 containerd[1462]: time="2025-04-30T03:28:55.671240716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671459571Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671499189Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671721335Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671754239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671858109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:55.671972 containerd[1462]: time="2025-04-30T03:28:55.671880073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.672538 containerd[1462]: time="2025-04-30T03:28:55.672507034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:55.672643 containerd[1462]: time="2025-04-30T03:28:55.672623890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.672761 containerd[1462]: time="2025-04-30T03:28:55.672737248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:55.673057 containerd[1462]: time="2025-04-30T03:28:55.672920515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.673546 containerd[1462]: time="2025-04-30T03:28:55.673271230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.673858 containerd[1462]: time="2025-04-30T03:28:55.673830099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:55.674358 containerd[1462]: time="2025-04-30T03:28:55.674178247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:55.674358 containerd[1462]: time="2025-04-30T03:28:55.674211959Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:55.674833 containerd[1462]: time="2025-04-30T03:28:55.674565920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 03:28:55.674833 containerd[1462]: time="2025-04-30T03:28:55.674646238Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.681875076Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.681977619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.682076490Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.682107604Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.682153798Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:55.682432 containerd[1462]: time="2025-04-30T03:28:55.682354907Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684067880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684287968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684316492Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684338611Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684365441Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684388729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684410610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684434552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684458331Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684490160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684511079Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684533991Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684590469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.687340 containerd[1462]: time="2025-04-30T03:28:55.684615228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684641127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684665380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684687114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684720289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684740978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684765136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684788386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684824848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.688051 containerd[1462]: time="2025-04-30T03:28:55.684855859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688517943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688565822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688597401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688636768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688658515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688677790Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688780774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688810183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688912083Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688936561Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688953680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688975250Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.688993576Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:55.690056 containerd[1462]: time="2025-04-30T03:28:55.689011985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:28:55.690751 containerd[1462]: time="2025-04-30T03:28:55.689490316Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:55.690751 containerd[1462]: time="2025-04-30T03:28:55.689595276Z" level=info msg="Connect containerd service" Apr 30 03:28:55.690751 containerd[1462]: time="2025-04-30T03:28:55.689656952Z" level=info msg="using legacy CRI server" Apr 30 03:28:55.690751 containerd[1462]: time="2025-04-30T03:28:55.689669466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:55.690751 containerd[1462]: time="2025-04-30T03:28:55.689892426Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.695671324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.695887851Z" level=info msg="Start subscribing containerd event" Apr 30 
03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.695972164Z" level=info msg="Start recovering state" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.696088187Z" level=info msg="Start event monitor" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.696118795Z" level=info msg="Start snapshots syncer" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.696133897Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:55.697064 containerd[1462]: time="2025-04-30T03:28:55.696146100Z" level=info msg="Start streaming server" Apr 30 03:28:55.703663 containerd[1462]: time="2025-04-30T03:28:55.701313193Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:55.703663 containerd[1462]: time="2025-04-30T03:28:55.701497361Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:55.703663 containerd[1462]: time="2025-04-30T03:28:55.703142068Z" level=info msg="containerd successfully booted in 0.144721s" Apr 30 03:28:55.702846 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:55.714187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:55.734538 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:55.752545 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:55.763535 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:56.035545 sshd[1546]: Accepted publickey for core from 139.178.68.195 port 59670 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:56.038435 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:56.071667 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:28:56.088509 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 30 03:28:56.107206 systemd-logind[1443]: New session 1 of user core. Apr 30 03:28:56.136500 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:56.160567 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:56.209570 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:56.284535 tar[1459]: linux-amd64/LICENSE Apr 30 03:28:56.284535 tar[1459]: linux-amd64/README.md Apr 30 03:28:56.293184 instance-setup[1505]: INFO Running google_set_multiqueue. Apr 30 03:28:56.321249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:28:56.340478 instance-setup[1505]: INFO Set channels for eth0 to 2. Apr 30 03:28:56.347454 instance-setup[1505]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Apr 30 03:28:56.350117 instance-setup[1505]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 30 03:28:56.350967 instance-setup[1505]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 30 03:28:56.353535 instance-setup[1505]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 30 03:28:56.354345 instance-setup[1505]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Apr 30 03:28:56.356769 instance-setup[1505]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 30 03:28:56.357761 instance-setup[1505]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Apr 30 03:28:56.359994 instance-setup[1505]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 30 03:28:56.371160 instance-setup[1505]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 30 03:28:56.377580 instance-setup[1505]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 30 03:28:56.380141 instance-setup[1505]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 30 03:28:56.380196 instance-setup[1505]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 30 03:28:56.407582 init.sh[1499]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 30 03:28:56.498775 systemd[1561]: Queued start job for default target default.target. Apr 30 03:28:56.507886 systemd[1561]: Created slice app.slice - User Application Slice. Apr 30 03:28:56.507959 systemd[1561]: Reached target paths.target - Paths. Apr 30 03:28:56.507988 systemd[1561]: Reached target timers.target - Timers. Apr 30 03:28:56.511437 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:56.545889 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:56.546134 systemd[1561]: Reached target sockets.target - Sockets. Apr 30 03:28:56.546162 systemd[1561]: Reached target basic.target - Basic System. Apr 30 03:28:56.546268 systemd[1561]: Reached target default.target - Main User Target. Apr 30 03:28:56.546325 systemd[1561]: Startup finished in 300ms. Apr 30 03:28:56.546360 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:56.563320 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:56.642815 startup-script[1598]: INFO Starting startup scripts. Apr 30 03:28:56.649721 startup-script[1598]: INFO No startup scripts found in metadata. Apr 30 03:28:56.649806 startup-script[1598]: INFO Finished running startup scripts. 
Apr 30 03:28:56.677330 init.sh[1499]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 30 03:28:56.677330 init.sh[1499]: + daemon_pids=() Apr 30 03:28:56.677330 init.sh[1499]: + for d in accounts clock_skew network Apr 30 03:28:56.677330 init.sh[1499]: + daemon_pids+=($!) Apr 30 03:28:56.677330 init.sh[1499]: + for d in accounts clock_skew network Apr 30 03:28:56.677680 init.sh[1499]: + daemon_pids+=($!) Apr 30 03:28:56.677680 init.sh[1499]: + for d in accounts clock_skew network Apr 30 03:28:56.678074 init.sh[1499]: + daemon_pids+=($!) Apr 30 03:28:56.678074 init.sh[1499]: + NOTIFY_SOCKET=/run/systemd/notify Apr 30 03:28:56.678074 init.sh[1499]: + /usr/bin/systemd-notify --ready Apr 30 03:28:56.678252 init.sh[1604]: + /usr/bin/google_accounts_daemon Apr 30 03:28:56.678900 init.sh[1605]: + /usr/bin/google_clock_skew_daemon Apr 30 03:28:56.680944 init.sh[1606]: + /usr/bin/google_network_daemon Apr 30 03:28:56.709263 systemd[1]: Started oem-gce.service - GCE Linux Agent. Apr 30 03:28:56.720864 init.sh[1499]: + wait -n 1604 1605 1606 Apr 30 03:28:56.829506 systemd[1]: Started sshd@1-10.128.0.99:22-139.178.68.195:34486.service - OpenSSH per-connection server daemon (139.178.68.195:34486). Apr 30 03:28:57.179725 ntpd[1431]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:63%2]:123 Apr 30 03:28:57.181510 ntpd[1431]: 30 Apr 03:28:57 ntpd[1431]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:63%2]:123 Apr 30 03:28:57.230969 google-clock-skew[1605]: INFO Starting Google Clock Skew daemon. Apr 30 03:28:57.233479 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 34486 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:57.235135 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:57.250959 systemd-logind[1443]: New session 2 of user core. Apr 30 03:28:57.252256 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 03:28:57.258786 google-clock-skew[1605]: INFO Clock drift token has changed: 0. Apr 30 03:28:57.318764 google-networking[1606]: INFO Starting Google Networking daemon. Apr 30 03:28:57.344895 groupadd[1620]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 30 03:28:57.349600 groupadd[1620]: group added to /etc/gshadow: name=google-sudoers Apr 30 03:28:57.358447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:57.372156 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:57.377236 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:57.383707 systemd[1]: Startup finished in 1.046s (kernel) + 9.340s (initrd) + 9.344s (userspace) = 19.731s. Apr 30 03:28:57.426910 groupadd[1620]: new group: name=google-sudoers, GID=1000 Apr 30 03:28:57.456280 google-accounts[1604]: INFO Starting Google Accounts daemon. Apr 30 03:28:57.467323 sshd[1610]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:57.470412 google-accounts[1604]: WARNING OS Login not installed. Apr 30 03:28:57.473556 google-accounts[1604]: INFO Creating a new user account for 0. Apr 30 03:28:57.474688 systemd[1]: sshd@1-10.128.0.99:22-139.178.68.195:34486.service: Deactivated successfully. Apr 30 03:28:57.478821 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:28:57.481512 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:28:57.482698 init.sh[1641]: useradd: invalid user name '0': use --badname to ignore Apr 30 03:28:57.483168 google-accounts[1604]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Apr 30 03:28:57.485291 systemd-logind[1443]: Removed session 2. 
Apr 30 03:28:57.518527 systemd[1]: Started sshd@2-10.128.0.99:22-139.178.68.195:34500.service - OpenSSH per-connection server daemon (139.178.68.195:34500). Apr 30 03:28:58.000082 systemd-resolved[1320]: Clock change detected. Flushing caches. Apr 30 03:28:58.002358 google-clock-skew[1605]: INFO Synced system time with hardware clock. Apr 30 03:28:58.186258 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 34500 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:58.189000 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:58.196668 systemd-logind[1443]: New session 3 of user core. Apr 30 03:28:58.203159 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:28:58.397625 sshd[1645]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:58.403706 systemd[1]: sshd@2-10.128.0.99:22-139.178.68.195:34500.service: Deactivated successfully. Apr 30 03:28:58.407109 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:28:58.408438 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:28:58.411120 systemd-logind[1443]: Removed session 3. Apr 30 03:28:58.454671 systemd[1]: Started sshd@3-10.128.0.99:22-139.178.68.195:34508.service - OpenSSH per-connection server daemon (139.178.68.195:34508). Apr 30 03:28:58.726147 kubelet[1627]: E0430 03:28:58.725978 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:58.729295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:58.729564 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:28:58.730056 systemd[1]: kubelet.service: Consumed 1.323s CPU time. Apr 30 03:28:58.751203 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 34508 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:58.753079 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:58.759679 systemd-logind[1443]: New session 4 of user core. Apr 30 03:28:58.765169 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:28:58.968627 sshd[1657]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:58.974110 systemd[1]: sshd@3-10.128.0.99:22-139.178.68.195:34508.service: Deactivated successfully. Apr 30 03:28:58.976324 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:28:58.977445 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:28:58.979132 systemd-logind[1443]: Removed session 4. Apr 30 03:28:59.023333 systemd[1]: Started sshd@4-10.128.0.99:22-139.178.68.195:34518.service - OpenSSH per-connection server daemon (139.178.68.195:34518). Apr 30 03:28:59.312612 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 34518 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:59.314507 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:59.320006 systemd-logind[1443]: New session 5 of user core. Apr 30 03:28:59.328171 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 03:28:59.507002 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:28:59.507509 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:59.521069 sudo[1669]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:59.564219 sshd[1666]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:59.569268 systemd[1]: sshd@4-10.128.0.99:22-139.178.68.195:34518.service: Deactivated successfully. Apr 30 03:28:59.571609 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:28:59.573675 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:28:59.575154 systemd-logind[1443]: Removed session 5. Apr 30 03:28:59.629388 systemd[1]: Started sshd@5-10.128.0.99:22-139.178.68.195:34530.service - OpenSSH per-connection server daemon (139.178.68.195:34530). Apr 30 03:28:59.911054 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 34530 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:28:59.913003 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:59.918460 systemd-logind[1443]: New session 6 of user core. Apr 30 03:28:59.926152 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 03:29:00.091449 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:29:00.092078 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:29:00.097308 sudo[1678]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:00.114246 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:29:00.114746 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:29:00.133352 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:29:00.138717 auditctl[1681]: No rules Apr 30 03:29:00.139266 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:29:00.139557 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:29:00.146642 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:29:00.198968 augenrules[1700]: No rules Apr 30 03:29:00.200813 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:29:00.202831 sudo[1677]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:00.246751 sshd[1674]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:00.252265 systemd[1]: sshd@5-10.128.0.99:22-139.178.68.195:34530.service: Deactivated successfully. Apr 30 03:29:00.254511 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:29:00.255575 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:29:00.256994 systemd-logind[1443]: Removed session 6. Apr 30 03:29:00.302715 systemd[1]: Started sshd@6-10.128.0.99:22-139.178.68.195:34542.service - OpenSSH per-connection server daemon (139.178.68.195:34542). 
Apr 30 03:29:00.594585 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 34542 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:29:00.596528 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:00.602251 systemd-logind[1443]: New session 7 of user core. Apr 30 03:29:00.610392 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:29:00.775330 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:29:00.775844 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:29:01.218334 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:29:01.228531 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:29:01.672163 dockerd[1726]: time="2025-04-30T03:29:01.672077149Z" level=info msg="Starting up" Apr 30 03:29:01.794767 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2713774195-merged.mount: Deactivated successfully. Apr 30 03:29:01.865265 dockerd[1726]: time="2025-04-30T03:29:01.865182461Z" level=info msg="Loading containers: start." Apr 30 03:29:02.029117 kernel: Initializing XFRM netlink socket Apr 30 03:29:02.143855 systemd-networkd[1374]: docker0: Link UP Apr 30 03:29:02.166317 dockerd[1726]: time="2025-04-30T03:29:02.166261934Z" level=info msg="Loading containers: done." Apr 30 03:29:02.186386 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1092876643-merged.mount: Deactivated successfully. 
Apr 30 03:29:02.189482 dockerd[1726]: time="2025-04-30T03:29:02.189413118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:29:02.189634 dockerd[1726]: time="2025-04-30T03:29:02.189557388Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:29:02.189782 dockerd[1726]: time="2025-04-30T03:29:02.189734747Z" level=info msg="Daemon has completed initialization" Apr 30 03:29:02.232504 dockerd[1726]: time="2025-04-30T03:29:02.232035786Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:29:02.232420 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:29:03.281963 containerd[1462]: time="2025-04-30T03:29:03.281838647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:29:03.750556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734669398.mount: Deactivated successfully. 
Apr 30 03:29:05.434632 containerd[1462]: time="2025-04-30T03:29:05.434552396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.436582 containerd[1462]: time="2025-04-30T03:29:05.436517276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32681501" Apr 30 03:29:05.438092 containerd[1462]: time="2025-04-30T03:29:05.437960816Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.443594 containerd[1462]: time="2025-04-30T03:29:05.443534712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:05.445803 containerd[1462]: time="2025-04-30T03:29:05.444769518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.162873769s" Apr 30 03:29:05.445803 containerd[1462]: time="2025-04-30T03:29:05.445195765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:29:05.481042 containerd[1462]: time="2025-04-30T03:29:05.480996143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:29:07.173627 containerd[1462]: time="2025-04-30T03:29:07.173557016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.175220 containerd[1462]: time="2025-04-30T03:29:07.175131803Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29619468" Apr 30 03:29:07.176753 containerd[1462]: time="2025-04-30T03:29:07.176677266Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.182780 containerd[1462]: time="2025-04-30T03:29:07.182691892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.184698 containerd[1462]: time="2025-04-30T03:29:07.184400272Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.703333699s" Apr 30 03:29:07.184698 containerd[1462]: time="2025-04-30T03:29:07.184523881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:29:07.218530 containerd[1462]: time="2025-04-30T03:29:07.217643850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:29:08.322674 containerd[1462]: time="2025-04-30T03:29:08.322603828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:08.324378 containerd[1462]: 
time="2025-04-30T03:29:08.324306063Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17905598" Apr 30 03:29:08.325562 containerd[1462]: time="2025-04-30T03:29:08.325476503Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:08.329298 containerd[1462]: time="2025-04-30T03:29:08.329212918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:08.330850 containerd[1462]: time="2025-04-30T03:29:08.330674965Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.11297894s" Apr 30 03:29:08.330850 containerd[1462]: time="2025-04-30T03:29:08.330725423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:29:08.363426 containerd[1462]: time="2025-04-30T03:29:08.363369064Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:29:08.741577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:29:08.749261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:09.125730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:29:09.139870 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:29:09.237925 kubelet[1958]: E0430 03:29:09.236643 1958 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:29:09.243137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:29:09.243388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:29:09.677048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495072689.mount: Deactivated successfully. Apr 30 03:29:10.239002 containerd[1462]: time="2025-04-30T03:29:10.238936255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.240215 containerd[1462]: time="2025-04-30T03:29:10.240141994Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29187712" Apr 30 03:29:10.241564 containerd[1462]: time="2025-04-30T03:29:10.241498173Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.244369 containerd[1462]: time="2025-04-30T03:29:10.244325362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:10.246122 containerd[1462]: time="2025-04-30T03:29:10.245329432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id 
\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.8819077s" Apr 30 03:29:10.246122 containerd[1462]: time="2025-04-30T03:29:10.245382753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:29:10.275506 containerd[1462]: time="2025-04-30T03:29:10.275435276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:29:10.690028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090008166.mount: Deactivated successfully. Apr 30 03:29:11.747255 containerd[1462]: time="2025-04-30T03:29:11.747181213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:11.749118 containerd[1462]: time="2025-04-30T03:29:11.749038763Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Apr 30 03:29:11.750254 containerd[1462]: time="2025-04-30T03:29:11.750208304Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:11.756486 containerd[1462]: time="2025-04-30T03:29:11.755705954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:11.757689 containerd[1462]: time="2025-04-30T03:29:11.757637329Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.482149323s" Apr 30 03:29:11.757802 containerd[1462]: time="2025-04-30T03:29:11.757694660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:29:11.792070 containerd[1462]: time="2025-04-30T03:29:11.792021232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:29:12.168515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932013181.mount: Deactivated successfully. Apr 30 03:29:12.176175 containerd[1462]: time="2025-04-30T03:29:12.176106974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:12.177405 containerd[1462]: time="2025-04-30T03:29:12.177340986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Apr 30 03:29:12.178641 containerd[1462]: time="2025-04-30T03:29:12.178547170Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:12.182842 containerd[1462]: time="2025-04-30T03:29:12.182768147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:12.185111 containerd[1462]: time="2025-04-30T03:29:12.184023309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 391.950397ms" Apr 30 03:29:12.185111 containerd[1462]: time="2025-04-30T03:29:12.184069893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:29:12.214566 containerd[1462]: time="2025-04-30T03:29:12.214509454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:29:12.592832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904772317.mount: Deactivated successfully. Apr 30 03:29:14.764934 containerd[1462]: time="2025-04-30T03:29:14.764849213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.766638 containerd[1462]: time="2025-04-30T03:29:14.766569252Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Apr 30 03:29:14.767752 containerd[1462]: time="2025-04-30T03:29:14.767670446Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.771805 containerd[1462]: time="2025-04-30T03:29:14.771705346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.773547 containerd[1462]: time="2025-04-30T03:29:14.773355647Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.558794817s" Apr 30 
03:29:14.773547 containerd[1462]: time="2025-04-30T03:29:14.773410870Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:29:18.003797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:18.012408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:18.051499 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-7.scope)... Apr 30 03:29:18.051534 systemd[1]: Reloading... Apr 30 03:29:18.226366 zram_generator::config[2181]: No configuration found. Apr 30 03:29:18.369548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:18.475828 systemd[1]: Reloading finished in 423 ms. Apr 30 03:29:18.542672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:29:18.542819 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:29:18.543194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:18.549473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:19.251211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:19.251642 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:19.324551 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:29:19.324551 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:19.325119 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:19.325119 kubelet[2232]: I0430 03:29:19.324657 2232 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:19.921249 kubelet[2232]: I0430 03:29:19.921191 2232 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:29:19.921249 kubelet[2232]: I0430 03:29:19.921229 2232 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:19.921567 kubelet[2232]: I0430 03:29:19.921534 2232 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:29:19.956923 kubelet[2232]: I0430 03:29:19.955678 2232 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:19.957447 kubelet[2232]: E0430 03:29:19.957419 2232 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:19.974544 kubelet[2232]: I0430 03:29:19.974499 2232 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:19.979066 kubelet[2232]: I0430 03:29:19.978979 2232 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:19.979348 kubelet[2232]: I0430 03:29:19.979058 2232 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:19.979623 kubelet[2232]: I0430 03:29:19.979359 2232 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 03:29:19.979623 kubelet[2232]: I0430 03:29:19.979379 2232 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:19.981283 kubelet[2232]: I0430 03:29:19.981227 2232 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:19.982703 kubelet[2232]: I0430 03:29:19.982662 2232 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:29:19.982703 kubelet[2232]: I0430 03:29:19.982702 2232 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:19.983121 kubelet[2232]: I0430 03:29:19.982752 2232 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:19.983121 kubelet[2232]: I0430 03:29:19.982790 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:19.990405 kubelet[2232]: W0430 03:29:19.990343 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:19.990794 kubelet[2232]: E0430 03:29:19.990599 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:19.992864 kubelet[2232]: W0430 03:29:19.992391 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:19.992864 kubelet[2232]: E0430 03:29:19.992465 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:19.992864 kubelet[2232]: I0430 03:29:19.992591 2232 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:19.995319 kubelet[2232]: I0430 03:29:19.995274 2232 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:19.995428 kubelet[2232]: W0430 03:29:19.995366 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:29:19.996571 kubelet[2232]: I0430 03:29:19.996119 2232 server.go:1264] "Started kubelet" Apr 30 03:29:19.999286 kubelet[2232]: I0430 03:29:19.998424 2232 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:20.000937 kubelet[2232]: I0430 03:29:19.999834 2232 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:20.003302 kubelet[2232]: I0430 03:29:20.003250 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:20.005278 kubelet[2232]: I0430 03:29:20.005199 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:20.006705 kubelet[2232]: I0430 03:29:20.005526 2232 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:20.006705 kubelet[2232]: E0430 03:29:20.005784 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal.183afafaa62ae67a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,UID:ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,},FirstTimestamp:2025-04-30 03:29:19.996085882 +0000 UTC m=+0.737213047,LastTimestamp:2025-04-30 03:29:19.996085882 +0000 UTC m=+0.737213047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,}" Apr 30 03:29:20.015582 kubelet[2232]: E0430 03:29:20.013964 2232 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" not found" Apr 30 03:29:20.015582 kubelet[2232]: I0430 03:29:20.014337 2232 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:20.015582 kubelet[2232]: I0430 03:29:20.014668 2232 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:20.016651 kubelet[2232]: I0430 03:29:20.016623 2232 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:20.018398 kubelet[2232]: W0430 03:29:20.018322 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:20.018597 kubelet[2232]: E0430 03:29:20.018577 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:20.018857 kubelet[2232]: E0430 
03:29:20.018807 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="200ms" Apr 30 03:29:20.019448 kubelet[2232]: I0430 03:29:20.019417 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:20.025070 kubelet[2232]: I0430 03:29:20.025039 2232 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:20.025245 kubelet[2232]: I0430 03:29:20.025233 2232 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:20.042558 kubelet[2232]: E0430 03:29:20.042519 2232 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:20.048384 kubelet[2232]: I0430 03:29:20.048310 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:20.050753 kubelet[2232]: I0430 03:29:20.050705 2232 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:29:20.051235 kubelet[2232]: I0430 03:29:20.051097 2232 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:20.051526 kubelet[2232]: I0430 03:29:20.051450 2232 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:20.052682 kubelet[2232]: E0430 03:29:20.051836 2232 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:20.054185 kubelet[2232]: W0430 03:29:20.054107 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:20.054412 kubelet[2232]: E0430 03:29:20.054381 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:20.061598 kubelet[2232]: I0430 03:29:20.061567 2232 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:20.061786 kubelet[2232]: I0430 03:29:20.061714 2232 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:20.061786 kubelet[2232]: I0430 03:29:20.061765 2232 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:20.065171 kubelet[2232]: I0430 03:29:20.065133 2232 policy_none.go:49] "None policy: Start" Apr 30 03:29:20.066332 kubelet[2232]: I0430 03:29:20.066221 2232 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:20.066626 kubelet[2232]: I0430 03:29:20.066526 2232 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:20.076012 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 30 03:29:20.096259 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:29:20.101425 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:29:20.114833 kubelet[2232]: I0430 03:29:20.114309 2232 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:20.114833 kubelet[2232]: I0430 03:29:20.114617 2232 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:20.114833 kubelet[2232]: I0430 03:29:20.114787 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:20.118227 kubelet[2232]: E0430 03:29:20.118191 2232 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" not found" Apr 30 03:29:20.121168 kubelet[2232]: I0430 03:29:20.120764 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.121493 kubelet[2232]: E0430 03:29:20.121427 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.152912 kubelet[2232]: I0430 03:29:20.152760 2232 topology_manager.go:215] "Topology Admit Handler" podUID="4c98d0d0e47bcd40a88f3947740b8122" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.159293 kubelet[2232]: I0430 03:29:20.159222 2232 topology_manager.go:215] "Topology Admit Handler" podUID="4005fa3515b9e592f96c95ab1ac63038" podNamespace="kube-system" 
podName="kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.169182 kubelet[2232]: I0430 03:29:20.168825 2232 topology_manager.go:215] "Topology Admit Handler" podUID="b60fadb8e4d6c1667ecc38ee5beb16ee" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.177224 systemd[1]: Created slice kubepods-burstable-pod4c98d0d0e47bcd40a88f3947740b8122.slice - libcontainer container kubepods-burstable-pod4c98d0d0e47bcd40a88f3947740b8122.slice. Apr 30 03:29:20.197406 systemd[1]: Created slice kubepods-burstable-pod4005fa3515b9e592f96c95ab1ac63038.slice - libcontainer container kubepods-burstable-pod4005fa3515b9e592f96c95ab1ac63038.slice. Apr 30 03:29:20.211053 systemd[1]: Created slice kubepods-burstable-podb60fadb8e4d6c1667ecc38ee5beb16ee.slice - libcontainer container kubepods-burstable-podb60fadb8e4d6c1667ecc38ee5beb16ee.slice. Apr 30 03:29:20.219565 kubelet[2232]: E0430 03:29:20.219506 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="400ms" Apr 30 03:29:20.317795 kubelet[2232]: I0430 03:29:20.317743 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.317795 kubelet[2232]: I0430 03:29:20.317817 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318308 kubelet[2232]: I0430 03:29:20.317869 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b60fadb8e4d6c1667ecc38ee5beb16ee-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"b60fadb8e4d6c1667ecc38ee5beb16ee\") " pod="kube-system/kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318308 kubelet[2232]: I0430 03:29:20.317921 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318308 kubelet[2232]: I0430 03:29:20.317954 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318308 kubelet[2232]: I0430 03:29:20.317988 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318454 kubelet[2232]: I0430 03:29:20.318015 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318454 kubelet[2232]: I0430 03:29:20.318040 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.318454 kubelet[2232]: I0430 03:29:20.318067 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.327047 kubelet[2232]: I0430 03:29:20.327011 2232 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.327588 kubelet[2232]: E0430 03:29:20.327442 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.495414 containerd[1462]: time="2025-04-30T03:29:20.495359438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:4c98d0d0e47bcd40a88f3947740b8122,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:20.508511 containerd[1462]: time="2025-04-30T03:29:20.508445214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:4005fa3515b9e592f96c95ab1ac63038,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:20.520753 containerd[1462]: time="2025-04-30T03:29:20.520586030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:b60fadb8e4d6c1667ecc38ee5beb16ee,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:20.620273 kubelet[2232]: E0430 03:29:20.620203 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="800ms" Apr 30 03:29:20.733497 kubelet[2232]: I0430 03:29:20.733439 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.734029 kubelet[2232]: E0430 03:29:20.733965 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 
10.128.0.99:6443: connect: connection refused" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:20.869805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563982971.mount: Deactivated successfully. Apr 30 03:29:20.879642 containerd[1462]: time="2025-04-30T03:29:20.879579959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:20.880962 containerd[1462]: time="2025-04-30T03:29:20.880908578Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:20.882167 containerd[1462]: time="2025-04-30T03:29:20.882101221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:20.883297 containerd[1462]: time="2025-04-30T03:29:20.883230425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Apr 30 03:29:20.885055 containerd[1462]: time="2025-04-30T03:29:20.884996607Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:20.886855 containerd[1462]: time="2025-04-30T03:29:20.886510546Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:20.886855 containerd[1462]: time="2025-04-30T03:29:20.886736285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:29:20.896955 containerd[1462]: time="2025-04-30T03:29:20.896455916Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:29:20.900169 containerd[1462]: time="2025-04-30T03:29:20.899453121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.912055ms" Apr 30 03:29:20.901997 containerd[1462]: time="2025-04-30T03:29:20.901948225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 406.481685ms" Apr 30 03:29:20.903197 containerd[1462]: time="2025-04-30T03:29:20.902862322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 382.165574ms" Apr 30 03:29:21.080629 kubelet[2232]: W0430 03:29:21.080541 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.080629 kubelet[2232]: E0430 03:29:21.080601 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.096079 containerd[1462]: time="2025-04-30T03:29:21.095508145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:21.096079 containerd[1462]: time="2025-04-30T03:29:21.095593107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:21.096079 containerd[1462]: time="2025-04-30T03:29:21.095635615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.096079 containerd[1462]: time="2025-04-30T03:29:21.095775636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.104919 containerd[1462]: time="2025-04-30T03:29:21.104327640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:21.107967 containerd[1462]: time="2025-04-30T03:29:21.105487894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:21.107967 containerd[1462]: time="2025-04-30T03:29:21.105520841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.107967 containerd[1462]: time="2025-04-30T03:29:21.105649307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.120610 containerd[1462]: time="2025-04-30T03:29:21.120361824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:21.121418 containerd[1462]: time="2025-04-30T03:29:21.120587205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:21.121685 containerd[1462]: time="2025-04-30T03:29:21.121455457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.122585 kubelet[2232]: W0430 03:29:21.122458 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.122585 kubelet[2232]: E0430 03:29:21.122561 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.124013 containerd[1462]: time="2025-04-30T03:29:21.123696609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:21.150161 systemd[1]: Started cri-containerd-dc055f61d80c2c8b3e92533ee80ff109b091b1b0c26c966740ad7863f1426887.scope - libcontainer container dc055f61d80c2c8b3e92533ee80ff109b091b1b0c26c966740ad7863f1426887. 
Apr 30 03:29:21.167720 kubelet[2232]: W0430 03:29:21.167281 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.167720 kubelet[2232]: E0430 03:29:21.167335 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.170452 systemd[1]: Started cri-containerd-6f9c35b01c7373481f0f81248dffdc3ccae7e0904747a70ca289dec03805e7ce.scope - libcontainer container 6f9c35b01c7373481f0f81248dffdc3ccae7e0904747a70ca289dec03805e7ce. Apr 30 03:29:21.191718 systemd[1]: Started cri-containerd-8977e6ca57f08510ce447f12937cb58df291f73aa4fbb82c465315d49135cff0.scope - libcontainer container 8977e6ca57f08510ce447f12937cb58df291f73aa4fbb82c465315d49135cff0. 
Apr 30 03:29:21.264334 containerd[1462]: time="2025-04-30T03:29:21.263657668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:4c98d0d0e47bcd40a88f3947740b8122,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc055f61d80c2c8b3e92533ee80ff109b091b1b0c26c966740ad7863f1426887\"" Apr 30 03:29:21.269546 kubelet[2232]: E0430 03:29:21.269495 2232 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-21291" Apr 30 03:29:21.276841 containerd[1462]: time="2025-04-30T03:29:21.275546261Z" level=info msg="CreateContainer within sandbox \"dc055f61d80c2c8b3e92533ee80ff109b091b1b0c26c966740ad7863f1426887\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:29:21.285154 containerd[1462]: time="2025-04-30T03:29:21.284953766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:4005fa3515b9e592f96c95ab1ac63038,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f9c35b01c7373481f0f81248dffdc3ccae7e0904747a70ca289dec03805e7ce\"" Apr 30 03:29:21.289005 kubelet[2232]: E0430 03:29:21.288960 2232 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flat" Apr 30 03:29:21.292784 containerd[1462]: time="2025-04-30T03:29:21.292732442Z" level=info msg="CreateContainer within sandbox \"6f9c35b01c7373481f0f81248dffdc3ccae7e0904747a70ca289dec03805e7ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:29:21.300450 containerd[1462]: 
time="2025-04-30T03:29:21.300401369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal,Uid:b60fadb8e4d6c1667ecc38ee5beb16ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"8977e6ca57f08510ce447f12937cb58df291f73aa4fbb82c465315d49135cff0\"" Apr 30 03:29:21.302799 kubelet[2232]: E0430 03:29:21.302760 2232 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-21291" Apr 30 03:29:21.304571 containerd[1462]: time="2025-04-30T03:29:21.304523602Z" level=info msg="CreateContainer within sandbox \"8977e6ca57f08510ce447f12937cb58df291f73aa4fbb82c465315d49135cff0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:29:21.306511 containerd[1462]: time="2025-04-30T03:29:21.306461814Z" level=info msg="CreateContainer within sandbox \"dc055f61d80c2c8b3e92533ee80ff109b091b1b0c26c966740ad7863f1426887\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9265ad6f0acf1ca83bcc8ad7f19781a8e22ad5b6704653c93b8e58f869bed72a\"" Apr 30 03:29:21.307315 containerd[1462]: time="2025-04-30T03:29:21.307278683Z" level=info msg="StartContainer for \"9265ad6f0acf1ca83bcc8ad7f19781a8e22ad5b6704653c93b8e58f869bed72a\"" Apr 30 03:29:21.325289 containerd[1462]: time="2025-04-30T03:29:21.325229052Z" level=info msg="CreateContainer within sandbox \"6f9c35b01c7373481f0f81248dffdc3ccae7e0904747a70ca289dec03805e7ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b939da2b87fbfd2e5f00b6a6f1aadd46add00f6d7c76162d90d7b9ef54d8adf1\"" Apr 30 03:29:21.326352 containerd[1462]: time="2025-04-30T03:29:21.326307884Z" level=info msg="StartContainer for \"b939da2b87fbfd2e5f00b6a6f1aadd46add00f6d7c76162d90d7b9ef54d8adf1\"" Apr 30 03:29:21.330415 
containerd[1462]: time="2025-04-30T03:29:21.330371191Z" level=info msg="CreateContainer within sandbox \"8977e6ca57f08510ce447f12937cb58df291f73aa4fbb82c465315d49135cff0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"94205535aed00b38994baa8d4496f1d3c9633c585e14dbf05d3aaded34c3685e\"" Apr 30 03:29:21.332565 containerd[1462]: time="2025-04-30T03:29:21.332525773Z" level=info msg="StartContainer for \"94205535aed00b38994baa8d4496f1d3c9633c585e14dbf05d3aaded34c3685e\"" Apr 30 03:29:21.355158 systemd[1]: Started cri-containerd-9265ad6f0acf1ca83bcc8ad7f19781a8e22ad5b6704653c93b8e58f869bed72a.scope - libcontainer container 9265ad6f0acf1ca83bcc8ad7f19781a8e22ad5b6704653c93b8e58f869bed72a. Apr 30 03:29:21.364410 kubelet[2232]: W0430 03:29:21.364036 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.364410 kubelet[2232]: E0430 03:29:21.364143 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.99:6443: connect: connection refused Apr 30 03:29:21.396158 systemd[1]: Started cri-containerd-b939da2b87fbfd2e5f00b6a6f1aadd46add00f6d7c76162d90d7b9ef54d8adf1.scope - libcontainer container b939da2b87fbfd2e5f00b6a6f1aadd46add00f6d7c76162d90d7b9ef54d8adf1. Apr 30 03:29:21.409490 systemd[1]: Started cri-containerd-94205535aed00b38994baa8d4496f1d3c9633c585e14dbf05d3aaded34c3685e.scope - libcontainer container 94205535aed00b38994baa8d4496f1d3c9633c585e14dbf05d3aaded34c3685e. 
Apr 30 03:29:21.421742 kubelet[2232]: E0430 03:29:21.421679 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="1.6s" Apr 30 03:29:21.484879 containerd[1462]: time="2025-04-30T03:29:21.484820532Z" level=info msg="StartContainer for \"9265ad6f0acf1ca83bcc8ad7f19781a8e22ad5b6704653c93b8e58f869bed72a\" returns successfully" Apr 30 03:29:21.540149 containerd[1462]: time="2025-04-30T03:29:21.540094488Z" level=info msg="StartContainer for \"94205535aed00b38994baa8d4496f1d3c9633c585e14dbf05d3aaded34c3685e\" returns successfully" Apr 30 03:29:21.543083 kubelet[2232]: I0430 03:29:21.542456 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:21.550937 containerd[1462]: time="2025-04-30T03:29:21.550587582Z" level=info msg="StartContainer for \"b939da2b87fbfd2e5f00b6a6f1aadd46add00f6d7c76162d90d7b9ef54d8adf1\" returns successfully" Apr 30 03:29:21.554866 kubelet[2232]: E0430 03:29:21.554258 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:23.160644 kubelet[2232]: I0430 03:29:23.160593 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:24.992081 kubelet[2232]: I0430 03:29:24.991795 2232 apiserver.go:52] "Watching apiserver" Apr 30 03:29:25.006806 kubelet[2232]: E0430 03:29:25.006710 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" 
not found" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:25.016270 kubelet[2232]: I0430 03:29:25.016195 2232 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:25.095747 kubelet[2232]: I0430 03:29:25.095434 2232 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:25.817997 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:29:26.083993 kubelet[2232]: W0430 03:29:26.083817 2232 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:26.973648 kubelet[2232]: W0430 03:29:26.973438 2232 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:27.160143 kubelet[2232]: W0430 03:29:27.159518 2232 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:27.225348 systemd[1]: Reloading requested from client PID 2514 ('systemctl') (unit session-7.scope)... Apr 30 03:29:27.225372 systemd[1]: Reloading... Apr 30 03:29:27.363938 zram_generator::config[2557]: No configuration found. Apr 30 03:29:27.521865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:27.651621 systemd[1]: Reloading finished in 425 ms. 
Apr 30 03:29:27.711719 kubelet[2232]: I0430 03:29:27.711657 2232 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:27.712120 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:27.723993 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:29:27.724337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:27.724418 systemd[1]: kubelet.service: Consumed 1.248s CPU time, 116.4M memory peak, 0B memory swap peak. Apr 30 03:29:27.733395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:28.008230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:28.018684 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:28.102931 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:28.102931 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:28.102931 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:29:28.102931 kubelet[2602]: I0430 03:29:28.102280 2602 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:28.110630 kubelet[2602]: I0430 03:29:28.110598 2602 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:29:28.110804 kubelet[2602]: I0430 03:29:28.110793 2602 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:28.111146 kubelet[2602]: I0430 03:29:28.111125 2602 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:29:28.112943 kubelet[2602]: I0430 03:29:28.112870 2602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:29:28.114591 kubelet[2602]: I0430 03:29:28.114400 2602 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:28.124636 kubelet[2602]: I0430 03:29:28.124603 2602 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:28.125143 kubelet[2602]: I0430 03:29:28.125099 2602 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:28.125422 kubelet[2602]: I0430 03:29:28.125140 2602 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:28.125613 kubelet[2602]: I0430 03:29:28.125441 2602 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 03:29:28.125613 kubelet[2602]: I0430 03:29:28.125460 2602 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:28.125613 kubelet[2602]: I0430 03:29:28.125531 2602 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:28.125781 kubelet[2602]: I0430 03:29:28.125675 2602 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:29:28.125781 kubelet[2602]: I0430 03:29:28.125716 2602 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:28.125781 kubelet[2602]: I0430 03:29:28.125750 2602 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:28.125781 kubelet[2602]: I0430 03:29:28.125775 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:28.130397 kubelet[2602]: I0430 03:29:28.130360 2602 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:28.130931 kubelet[2602]: I0430 03:29:28.130637 2602 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:28.132604 kubelet[2602]: I0430 03:29:28.132565 2602 server.go:1264] "Started kubelet" Apr 30 03:29:28.140921 kubelet[2602]: I0430 03:29:28.138819 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:28.153535 kubelet[2602]: I0430 03:29:28.153479 2602 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:28.162514 kubelet[2602]: I0430 03:29:28.162480 2602 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:28.165073 kubelet[2602]: I0430 03:29:28.165041 2602 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:28.169047 kubelet[2602]: I0430 03:29:28.168945 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:28.172677 kubelet[2602]: I0430 03:29:28.172648 2602 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:28.182917 kubelet[2602]: I0430 03:29:28.173103 2602 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:28.183277 kubelet[2602]: I0430 03:29:28.183242 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:28.185652 kubelet[2602]: I0430 03:29:28.185040 2602 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:28.188228 kubelet[2602]: I0430 03:29:28.188046 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:29:28.188364 kubelet[2602]: I0430 03:29:28.188254 2602 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:28.188364 kubelet[2602]: I0430 03:29:28.188317 2602 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:28.188471 kubelet[2602]: E0430 03:29:28.188419 2602 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:28.191151 kubelet[2602]: I0430 03:29:28.191119 2602 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:28.191406 kubelet[2602]: I0430 03:29:28.191379 2602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:28.199937 kubelet[2602]: I0430 03:29:28.199875 2602 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:28.206542 kubelet[2602]: E0430 03:29:28.206484 2602 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:28.274505 kubelet[2602]: I0430 03:29:28.272632 2602 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.282025 kubelet[2602]: I0430 03:29:28.281981 2602 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:28.282025 kubelet[2602]: I0430 03:29:28.282006 2602 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:28.282248 kubelet[2602]: I0430 03:29:28.282055 2602 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:28.282486 kubelet[2602]: I0430 03:29:28.282407 2602 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:29:28.282747 kubelet[2602]: I0430 03:29:28.282432 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:29:28.283000 kubelet[2602]: I0430 03:29:28.282865 2602 policy_none.go:49] "None policy: Start" Apr 30 03:29:28.287510 kubelet[2602]: I0430 03:29:28.286224 2602 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:28.287510 kubelet[2602]: I0430 03:29:28.287001 2602 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:28.287510 kubelet[2602]: I0430 03:29:28.287395 2602 state_mem.go:75] "Updated machine memory state" Apr 30 03:29:28.291395 kubelet[2602]: E0430 03:29:28.289130 2602 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:29:28.291395 kubelet[2602]: I0430 03:29:28.289516 2602 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.291395 kubelet[2602]: I0430 03:29:28.289606 2602 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.310028 kubelet[2602]: I0430 03:29:28.309934 2602 manager.go:479] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:28.310246 kubelet[2602]: I0430 03:29:28.310197 2602 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:28.310387 kubelet[2602]: I0430 03:29:28.310368 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:28.490795 kubelet[2602]: I0430 03:29:28.489489 2602 topology_manager.go:215] "Topology Admit Handler" podUID="4c98d0d0e47bcd40a88f3947740b8122" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.490795 kubelet[2602]: I0430 03:29:28.489636 2602 topology_manager.go:215] "Topology Admit Handler" podUID="4005fa3515b9e592f96c95ab1ac63038" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.490795 kubelet[2602]: I0430 03:29:28.489734 2602 topology_manager.go:215] "Topology Admit Handler" podUID="b60fadb8e4d6c1667ecc38ee5beb16ee" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.507078 kubelet[2602]: W0430 03:29:28.507041 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:28.507398 kubelet[2602]: W0430 03:29:28.507040 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:28.507612 kubelet[2602]: E0430 03:29:28.507578 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.507837 kubelet[2602]: E0430 03:29:28.507720 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.508047 kubelet[2602]: W0430 03:29:28.507068 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:28.508250 kubelet[2602]: E0430 03:29:28.508210 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588153 kubelet[2602]: I0430 03:29:28.587631 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588153 kubelet[2602]: I0430 03:29:28.587719 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b60fadb8e4d6c1667ecc38ee5beb16ee-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"b60fadb8e4d6c1667ecc38ee5beb16ee\") " 
pod="kube-system/kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588153 kubelet[2602]: I0430 03:29:28.587754 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588153 kubelet[2602]: I0430 03:29:28.587784 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588747 kubelet[2602]: I0430 03:29:28.587828 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588747 kubelet[2602]: I0430 03:29:28.587863 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588747 kubelet[2602]: I0430 03:29:28.587918 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.588747 kubelet[2602]: I0430 03:29:28.588205 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4005fa3515b9e592f96c95ab1ac63038-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4005fa3515b9e592f96c95ab1ac63038\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:28.590428 kubelet[2602]: I0430 03:29:28.588274 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c98d0d0e47bcd40a88f3947740b8122-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" (UID: \"4c98d0d0e47bcd40a88f3947740b8122\") " pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:29.126952 kubelet[2602]: I0430 03:29:29.126839 2602 apiserver.go:52] "Watching apiserver" Apr 30 03:29:29.182709 kubelet[2602]: I0430 03:29:29.182431 2602 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:29.244423 kubelet[2602]: W0430 03:29:29.244033 2602 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 30 03:29:29.244423 kubelet[2602]: E0430 03:29:29.244122 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:29:29.285686 kubelet[2602]: I0430 03:29:29.285601 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" podStartSLOduration=3.2855762090000002 podStartE2EDuration="3.285576209s" podCreationTimestamp="2025-04-30 03:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:29.285038435 +0000 UTC m=+1.259680181" watchObservedRunningTime="2025-04-30 03:29:29.285576209 +0000 UTC m=+1.260217956" Apr 30 03:29:29.286040 kubelet[2602]: I0430 03:29:29.285792 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" podStartSLOduration=2.285781901 podStartE2EDuration="2.285781901s" podCreationTimestamp="2025-04-30 03:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:29.272306876 +0000 UTC m=+1.246948623" watchObservedRunningTime="2025-04-30 03:29:29.285781901 +0000 UTC m=+1.260423653" Apr 30 03:29:31.204512 kubelet[2602]: I0430 03:29:31.204357 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" podStartSLOduration=5.204332595 podStartE2EDuration="5.204332595s" podCreationTimestamp="2025-04-30 03:29:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:29.298394646 +0000 UTC m=+1.273036392" watchObservedRunningTime="2025-04-30 03:29:31.204332595 +0000 UTC m=+3.178974340" Apr 30 03:29:34.179364 sudo[1711]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:34.223013 sshd[1708]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:34.227966 systemd[1]: sshd@6-10.128.0.99:22-139.178.68.195:34542.service: Deactivated successfully. Apr 30 03:29:34.230809 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:29:34.231092 systemd[1]: session-7.scope: Consumed 6.311s CPU time, 192.8M memory peak, 0B memory swap peak. Apr 30 03:29:34.232928 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:29:34.234932 systemd-logind[1443]: Removed session 7. Apr 30 03:29:40.032009 update_engine[1447]: I20250430 03:29:40.031877 1447 update_attempter.cc:509] Updating boot flags... Apr 30 03:29:40.100924 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2687) Apr 30 03:29:40.232323 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2688) Apr 30 03:29:40.367449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2688) Apr 30 03:29:41.822745 kubelet[2602]: I0430 03:29:41.822699 2602 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:41.823769 containerd[1462]: time="2025-04-30T03:29:41.823229552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 03:29:41.824378 kubelet[2602]: I0430 03:29:41.823959 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:42.439480 kubelet[2602]: I0430 03:29:42.438820 2602 topology_manager.go:215] "Topology Admit Handler" podUID="14794fe0-d088-499f-b56b-8b4f8c649184" podNamespace="kube-system" podName="kube-proxy-5b9z2" Apr 30 03:29:42.456178 systemd[1]: Created slice kubepods-besteffort-pod14794fe0_d088_499f_b56b_8b4f8c649184.slice - libcontainer container kubepods-besteffort-pod14794fe0_d088_499f_b56b_8b4f8c649184.slice. Apr 30 03:29:42.482597 kubelet[2602]: I0430 03:29:42.482065 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14794fe0-d088-499f-b56b-8b4f8c649184-xtables-lock\") pod \"kube-proxy-5b9z2\" (UID: \"14794fe0-d088-499f-b56b-8b4f8c649184\") " pod="kube-system/kube-proxy-5b9z2" Apr 30 03:29:42.482597 kubelet[2602]: I0430 03:29:42.482137 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14794fe0-d088-499f-b56b-8b4f8c649184-kube-proxy\") pod \"kube-proxy-5b9z2\" (UID: \"14794fe0-d088-499f-b56b-8b4f8c649184\") " pod="kube-system/kube-proxy-5b9z2" Apr 30 03:29:42.482597 kubelet[2602]: I0430 03:29:42.482171 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14794fe0-d088-499f-b56b-8b4f8c649184-lib-modules\") pod \"kube-proxy-5b9z2\" (UID: \"14794fe0-d088-499f-b56b-8b4f8c649184\") " pod="kube-system/kube-proxy-5b9z2" Apr 30 03:29:42.482597 kubelet[2602]: I0430 03:29:42.482199 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxq7\" (UniqueName: \"kubernetes.io/projected/14794fe0-d088-499f-b56b-8b4f8c649184-kube-api-access-wrxq7\") pod 
\"kube-proxy-5b9z2\" (UID: \"14794fe0-d088-499f-b56b-8b4f8c649184\") " pod="kube-system/kube-proxy-5b9z2" Apr 30 03:29:42.589339 kubelet[2602]: E0430 03:29:42.589291 2602 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 03:29:42.589339 kubelet[2602]: E0430 03:29:42.589337 2602 projected.go:200] Error preparing data for projected volume kube-api-access-wrxq7 for pod kube-system/kube-proxy-5b9z2: configmap "kube-root-ca.crt" not found Apr 30 03:29:42.589649 kubelet[2602]: E0430 03:29:42.589431 2602 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14794fe0-d088-499f-b56b-8b4f8c649184-kube-api-access-wrxq7 podName:14794fe0-d088-499f-b56b-8b4f8c649184 nodeName:}" failed. No retries permitted until 2025-04-30 03:29:43.089403578 +0000 UTC m=+15.064045319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrxq7" (UniqueName: "kubernetes.io/projected/14794fe0-d088-499f-b56b-8b4f8c649184-kube-api-access-wrxq7") pod "kube-proxy-5b9z2" (UID: "14794fe0-d088-499f-b56b-8b4f8c649184") : configmap "kube-root-ca.crt" not found Apr 30 03:29:42.877219 kubelet[2602]: I0430 03:29:42.876418 2602 topology_manager.go:215] "Topology Admit Handler" podUID="75fb250c-a394-4b48-b016-5ad4394ead8f" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-8fw9r" Apr 30 03:29:42.895533 systemd[1]: Created slice kubepods-besteffort-pod75fb250c_a394_4b48_b016_5ad4394ead8f.slice - libcontainer container kubepods-besteffort-pod75fb250c_a394_4b48_b016_5ad4394ead8f.slice. 
Apr 30 03:29:42.985764 kubelet[2602]: I0430 03:29:42.985695 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75fb250c-a394-4b48-b016-5ad4394ead8f-var-lib-calico\") pod \"tigera-operator-797db67f8-8fw9r\" (UID: \"75fb250c-a394-4b48-b016-5ad4394ead8f\") " pod="tigera-operator/tigera-operator-797db67f8-8fw9r" Apr 30 03:29:42.985764 kubelet[2602]: I0430 03:29:42.985761 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmmq\" (UniqueName: \"kubernetes.io/projected/75fb250c-a394-4b48-b016-5ad4394ead8f-kube-api-access-brmmq\") pod \"tigera-operator-797db67f8-8fw9r\" (UID: \"75fb250c-a394-4b48-b016-5ad4394ead8f\") " pod="tigera-operator/tigera-operator-797db67f8-8fw9r" Apr 30 03:29:43.208842 containerd[1462]: time="2025-04-30T03:29:43.208674443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-8fw9r,Uid:75fb250c-a394-4b48-b016-5ad4394ead8f,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:29:43.247573 containerd[1462]: time="2025-04-30T03:29:43.247163618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:43.247573 containerd[1462]: time="2025-04-30T03:29:43.247260253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:43.247573 containerd[1462]: time="2025-04-30T03:29:43.247285036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.247573 containerd[1462]: time="2025-04-30T03:29:43.247424647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.284243 systemd[1]: Started cri-containerd-6f31e7951fafee9cb89872ca9aed3459826dcfcf50f4eec2f6b73946964c42d7.scope - libcontainer container 6f31e7951fafee9cb89872ca9aed3459826dcfcf50f4eec2f6b73946964c42d7. Apr 30 03:29:43.343780 containerd[1462]: time="2025-04-30T03:29:43.343612663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-8fw9r,Uid:75fb250c-a394-4b48-b016-5ad4394ead8f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f31e7951fafee9cb89872ca9aed3459826dcfcf50f4eec2f6b73946964c42d7\"" Apr 30 03:29:43.346447 containerd[1462]: time="2025-04-30T03:29:43.346289997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:29:43.365862 containerd[1462]: time="2025-04-30T03:29:43.365778016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b9z2,Uid:14794fe0-d088-499f-b56b-8b4f8c649184,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:43.398188 containerd[1462]: time="2025-04-30T03:29:43.397827391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:43.398188 containerd[1462]: time="2025-04-30T03:29:43.398049251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:43.398188 containerd[1462]: time="2025-04-30T03:29:43.398086936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.398734 containerd[1462]: time="2025-04-30T03:29:43.398259895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.424184 systemd[1]: Started cri-containerd-d1e565836c50363742b809579ef23fcafc2b9fb1581d466c33f03006555a1bb2.scope - libcontainer container d1e565836c50363742b809579ef23fcafc2b9fb1581d466c33f03006555a1bb2. Apr 30 03:29:43.459645 containerd[1462]: time="2025-04-30T03:29:43.459478306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b9z2,Uid:14794fe0-d088-499f-b56b-8b4f8c649184,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e565836c50363742b809579ef23fcafc2b9fb1581d466c33f03006555a1bb2\"" Apr 30 03:29:43.464941 containerd[1462]: time="2025-04-30T03:29:43.464645304Z" level=info msg="CreateContainer within sandbox \"d1e565836c50363742b809579ef23fcafc2b9fb1581d466c33f03006555a1bb2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:43.485474 containerd[1462]: time="2025-04-30T03:29:43.485406202Z" level=info msg="CreateContainer within sandbox \"d1e565836c50363742b809579ef23fcafc2b9fb1581d466c33f03006555a1bb2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4145c968f00d1a8cc4a4dc9801aecf17f1edd2b0b3930b70eec8b291bfc69fa\"" Apr 30 03:29:43.487919 containerd[1462]: time="2025-04-30T03:29:43.486316358Z" level=info msg="StartContainer for \"a4145c968f00d1a8cc4a4dc9801aecf17f1edd2b0b3930b70eec8b291bfc69fa\"" Apr 30 03:29:43.528165 systemd[1]: Started cri-containerd-a4145c968f00d1a8cc4a4dc9801aecf17f1edd2b0b3930b70eec8b291bfc69fa.scope - libcontainer container a4145c968f00d1a8cc4a4dc9801aecf17f1edd2b0b3930b70eec8b291bfc69fa. Apr 30 03:29:43.569444 containerd[1462]: time="2025-04-30T03:29:43.569376257Z" level=info msg="StartContainer for \"a4145c968f00d1a8cc4a4dc9801aecf17f1edd2b0b3930b70eec8b291bfc69fa\" returns successfully" Apr 30 03:29:44.369190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799036213.mount: Deactivated successfully. 
Apr 30 03:29:45.255412 containerd[1462]: time="2025-04-30T03:29:45.255337651Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.256820 containerd[1462]: time="2025-04-30T03:29:45.256742189Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:29:45.258263 containerd[1462]: time="2025-04-30T03:29:45.258182531Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.261402 containerd[1462]: time="2025-04-30T03:29:45.261328722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.263106 containerd[1462]: time="2025-04-30T03:29:45.262407387Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.916068185s" Apr 30 03:29:45.263106 containerd[1462]: time="2025-04-30T03:29:45.262462969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:29:45.266068 containerd[1462]: time="2025-04-30T03:29:45.265654360Z" level=info msg="CreateContainer within sandbox \"6f31e7951fafee9cb89872ca9aed3459826dcfcf50f4eec2f6b73946964c42d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:29:45.286917 containerd[1462]: time="2025-04-30T03:29:45.286833250Z" level=info msg="CreateContainer within sandbox 
\"6f31e7951fafee9cb89872ca9aed3459826dcfcf50f4eec2f6b73946964c42d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a\"" Apr 30 03:29:45.288861 containerd[1462]: time="2025-04-30T03:29:45.287653850Z" level=info msg="StartContainer for \"e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a\"" Apr 30 03:29:45.333274 systemd[1]: run-containerd-runc-k8s.io-e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a-runc.DoMb3u.mount: Deactivated successfully. Apr 30 03:29:45.342270 systemd[1]: Started cri-containerd-e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a.scope - libcontainer container e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a. Apr 30 03:29:45.381275 containerd[1462]: time="2025-04-30T03:29:45.381111759Z" level=info msg="StartContainer for \"e9711da31f36a73928402da8680c4e8ce20e222c59b06fd9b76c0fee6d57958a\" returns successfully" Apr 30 03:29:46.300818 kubelet[2602]: I0430 03:29:46.300361 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5b9z2" podStartSLOduration=4.300334975 podStartE2EDuration="4.300334975s" podCreationTimestamp="2025-04-30 03:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:44.292758365 +0000 UTC m=+16.267400114" watchObservedRunningTime="2025-04-30 03:29:46.300334975 +0000 UTC m=+18.274976743" Apr 30 03:29:48.782922 kubelet[2602]: I0430 03:29:48.782777 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-8fw9r" podStartSLOduration=4.864819397 podStartE2EDuration="6.782750129s" podCreationTimestamp="2025-04-30 03:29:42 +0000 UTC" firstStartedPulling="2025-04-30 03:29:43.345761047 +0000 UTC m=+15.320402785" lastFinishedPulling="2025-04-30 03:29:45.26369178 +0000 UTC 
m=+17.238333517" observedRunningTime="2025-04-30 03:29:46.302098154 +0000 UTC m=+18.276739900" watchObservedRunningTime="2025-04-30 03:29:48.782750129 +0000 UTC m=+20.757391893" Apr 30 03:29:48.785459 kubelet[2602]: I0430 03:29:48.784414 2602 topology_manager.go:215] "Topology Admit Handler" podUID="8104c962-c19c-42ff-8eb1-2545483a40fe" podNamespace="calico-system" podName="calico-typha-6889d74865-h84f7" Apr 30 03:29:48.797332 systemd[1]: Created slice kubepods-besteffort-pod8104c962_c19c_42ff_8eb1_2545483a40fe.slice - libcontainer container kubepods-besteffort-pod8104c962_c19c_42ff_8eb1_2545483a40fe.slice. Apr 30 03:29:48.822789 kubelet[2602]: I0430 03:29:48.822749 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8104c962-c19c-42ff-8eb1-2545483a40fe-tigera-ca-bundle\") pod \"calico-typha-6889d74865-h84f7\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " pod="calico-system/calico-typha-6889d74865-h84f7" Apr 30 03:29:48.823187 kubelet[2602]: I0430 03:29:48.823148 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8104c962-c19c-42ff-8eb1-2545483a40fe-typha-certs\") pod \"calico-typha-6889d74865-h84f7\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " pod="calico-system/calico-typha-6889d74865-h84f7" Apr 30 03:29:48.824016 kubelet[2602]: I0430 03:29:48.823574 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjwjs\" (UniqueName: \"kubernetes.io/projected/8104c962-c19c-42ff-8eb1-2545483a40fe-kube-api-access-hjwjs\") pod \"calico-typha-6889d74865-h84f7\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " pod="calico-system/calico-typha-6889d74865-h84f7" Apr 30 03:29:49.066801 kubelet[2602]: I0430 03:29:49.066632 2602 topology_manager.go:215] "Topology Admit Handler" 
podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" podNamespace="calico-system" podName="calico-node-rnm47" Apr 30 03:29:49.083068 systemd[1]: Created slice kubepods-besteffort-pod0fe07556_52b9_47e3_914d_856a747fb4e0.slice - libcontainer container kubepods-besteffort-pod0fe07556_52b9_47e3_914d_856a747fb4e0.slice. Apr 30 03:29:49.106943 containerd[1462]: time="2025-04-30T03:29:49.106352004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6889d74865-h84f7,Uid:8104c962-c19c-42ff-8eb1-2545483a40fe,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:49.129753 kubelet[2602]: I0430 03:29:49.129147 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-log-dir\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.129753 kubelet[2602]: I0430 03:29:49.129242 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-xtables-lock\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.129753 kubelet[2602]: I0430 03:29:49.129281 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-lib-modules\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.129753 kubelet[2602]: I0430 03:29:49.129306 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-policysync\") pod \"calico-node-rnm47\" (UID: 
\"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.129753 kubelet[2602]: I0430 03:29:49.129335 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fe07556-52b9-47e3-914d-856a747fb4e0-tigera-ca-bundle\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130206 kubelet[2602]: I0430 03:29:49.129364 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-run-calico\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130206 kubelet[2602]: I0430 03:29:49.129391 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5xg\" (UniqueName: \"kubernetes.io/projected/0fe07556-52b9-47e3-914d-856a747fb4e0-kube-api-access-cs5xg\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130206 kubelet[2602]: I0430 03:29:49.129424 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0fe07556-52b9-47e3-914d-856a747fb4e0-node-certs\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130206 kubelet[2602]: I0430 03:29:49.129452 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-bin-dir\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " 
pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130206 kubelet[2602]: I0430 03:29:49.129479 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-net-dir\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130470 kubelet[2602]: I0430 03:29:49.129508 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-lib-calico\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.130470 kubelet[2602]: I0430 03:29:49.129544 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-flexvol-driver-host\") pod \"calico-node-rnm47\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " pod="calico-system/calico-node-rnm47" Apr 30 03:29:49.162619 containerd[1462]: time="2025-04-30T03:29:49.162436157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:49.164772 containerd[1462]: time="2025-04-30T03:29:49.164452377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:49.164772 containerd[1462]: time="2025-04-30T03:29:49.164489813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:49.164772 containerd[1462]: time="2025-04-30T03:29:49.164625707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:49.223916 systemd[1]: Started cri-containerd-c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45.scope - libcontainer container c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45. Apr 30 03:29:49.237172 kubelet[2602]: E0430 03:29:49.237105 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.237172 kubelet[2602]: W0430 03:29:49.237143 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.237416 kubelet[2602]: E0430 03:29:49.237193 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.237591 kubelet[2602]: E0430 03:29:49.237569 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.237591 kubelet[2602]: W0430 03:29:49.237589 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.238078 kubelet[2602]: E0430 03:29:49.237609 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.238078 kubelet[2602]: E0430 03:29:49.237944 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.238078 kubelet[2602]: W0430 03:29:49.237958 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.238078 kubelet[2602]: E0430 03:29:49.237978 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.240726 kubelet[2602]: E0430 03:29:49.240684 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.240726 kubelet[2602]: W0430 03:29:49.240717 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.241001 kubelet[2602]: E0430 03:29:49.240750 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.242202 kubelet[2602]: E0430 03:29:49.242166 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.242202 kubelet[2602]: W0430 03:29:49.242190 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.242492 kubelet[2602]: E0430 03:29:49.242212 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.243582 kubelet[2602]: E0430 03:29:49.243552 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.243582 kubelet[2602]: W0430 03:29:49.243576 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.243739 kubelet[2602]: E0430 03:29:49.243597 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.259084 kubelet[2602]: I0430 03:29:49.259029 2602 topology_manager.go:215] "Topology Admit Handler" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" podNamespace="calico-system" podName="csi-node-driver-rbw9s" Apr 30 03:29:49.259528 kubelet[2602]: E0430 03:29:49.259470 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:29:49.265257 kubelet[2602]: E0430 03:29:49.265211 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.265257 kubelet[2602]: W0430 03:29:49.265249 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.265492 kubelet[2602]: E0430 03:29:49.265281 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.312553 kubelet[2602]: E0430 03:29:49.312506 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.312553 kubelet[2602]: W0430 03:29:49.312550 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.312790 kubelet[2602]: E0430 03:29:49.312581 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.312967 kubelet[2602]: E0430 03:29:49.312944 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.312967 kubelet[2602]: W0430 03:29:49.312966 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.313172 kubelet[2602]: E0430 03:29:49.312986 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.313362 kubelet[2602]: E0430 03:29:49.313338 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.313362 kubelet[2602]: W0430 03:29:49.313362 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.313520 kubelet[2602]: E0430 03:29:49.313380 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.314221 kubelet[2602]: E0430 03:29:49.314181 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.314221 kubelet[2602]: W0430 03:29:49.314202 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.314221 kubelet[2602]: E0430 03:29:49.314220 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.314851 kubelet[2602]: E0430 03:29:49.314564 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.314851 kubelet[2602]: W0430 03:29:49.314579 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.314851 kubelet[2602]: E0430 03:29:49.314595 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.315265 kubelet[2602]: E0430 03:29:49.314970 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.315265 kubelet[2602]: W0430 03:29:49.314986 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.315265 kubelet[2602]: E0430 03:29:49.315004 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.316250 kubelet[2602]: E0430 03:29:49.316227 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.316250 kubelet[2602]: W0430 03:29:49.316247 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.316404 kubelet[2602]: E0430 03:29:49.316269 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.316992 kubelet[2602]: E0430 03:29:49.316614 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.316992 kubelet[2602]: W0430 03:29:49.316631 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.316992 kubelet[2602]: E0430 03:29:49.316649 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.319234 kubelet[2602]: E0430 03:29:49.318974 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.319234 kubelet[2602]: W0430 03:29:49.318992 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.319234 kubelet[2602]: E0430 03:29:49.319009 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.320061 kubelet[2602]: E0430 03:29:49.319953 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.320061 kubelet[2602]: W0430 03:29:49.319971 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.320061 kubelet[2602]: E0430 03:29:49.319988 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.321905 kubelet[2602]: E0430 03:29:49.321293 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.321905 kubelet[2602]: W0430 03:29:49.321314 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.321905 kubelet[2602]: E0430 03:29:49.321331 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.323250 kubelet[2602]: E0430 03:29:49.323217 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.323250 kubelet[2602]: W0430 03:29:49.323243 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.323429 kubelet[2602]: E0430 03:29:49.323264 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.324679 kubelet[2602]: E0430 03:29:49.324652 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.324679 kubelet[2602]: W0430 03:29:49.324675 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.324830 kubelet[2602]: E0430 03:29:49.324694 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.326473 kubelet[2602]: E0430 03:29:49.326445 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.326473 kubelet[2602]: W0430 03:29:49.326469 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.326633 kubelet[2602]: E0430 03:29:49.326488 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.328596 kubelet[2602]: E0430 03:29:49.327128 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.328596 kubelet[2602]: W0430 03:29:49.327161 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.328596 kubelet[2602]: E0430 03:29:49.327180 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.329218 kubelet[2602]: E0430 03:29:49.329022 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.329218 kubelet[2602]: W0430 03:29:49.329042 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.329218 kubelet[2602]: E0430 03:29:49.329060 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.329797 kubelet[2602]: E0430 03:29:49.329636 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.329797 kubelet[2602]: W0430 03:29:49.329656 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.329797 kubelet[2602]: E0430 03:29:49.329674 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.330410 kubelet[2602]: E0430 03:29:49.330213 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.330410 kubelet[2602]: W0430 03:29:49.330230 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.330410 kubelet[2602]: E0430 03:29:49.330248 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.330930 kubelet[2602]: E0430 03:29:49.330811 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.330930 kubelet[2602]: W0430 03:29:49.330829 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.330930 kubelet[2602]: E0430 03:29:49.330847 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.331731 kubelet[2602]: E0430 03:29:49.331469 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.331731 kubelet[2602]: W0430 03:29:49.331486 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.331731 kubelet[2602]: E0430 03:29:49.331503 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.332621 kubelet[2602]: E0430 03:29:49.332452 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.332621 kubelet[2602]: W0430 03:29:49.332470 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.332621 kubelet[2602]: E0430 03:29:49.332488 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.332621 kubelet[2602]: I0430 03:29:49.332538 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/afe01694-9e56-4cfa-9fa0-0fe8aaed621f-socket-dir\") pod \"csi-node-driver-rbw9s\" (UID: \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\") " pod="calico-system/csi-node-driver-rbw9s" Apr 30 03:29:49.333617 kubelet[2602]: E0430 03:29:49.333338 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.333617 kubelet[2602]: W0430 03:29:49.333359 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.333617 kubelet[2602]: E0430 03:29:49.333385 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.334126 kubelet[2602]: E0430 03:29:49.334053 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.334126 kubelet[2602]: W0430 03:29:49.334077 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.334126 kubelet[2602]: E0430 03:29:49.334095 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.334126 kubelet[2602]: I0430 03:29:49.334095 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cx8p\" (UniqueName: \"kubernetes.io/projected/afe01694-9e56-4cfa-9fa0-0fe8aaed621f-kube-api-access-7cx8p\") pod \"csi-node-driver-rbw9s\" (UID: \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\") " pod="calico-system/csi-node-driver-rbw9s" Apr 30 03:29:49.335619 kubelet[2602]: E0430 03:29:49.335587 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.335619 kubelet[2602]: W0430 03:29:49.335612 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.335619 kubelet[2602]: E0430 03:29:49.335633 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.336303 kubelet[2602]: E0430 03:29:49.336279 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.336303 kubelet[2602]: W0430 03:29:49.336301 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.336997 kubelet[2602]: E0430 03:29:49.336330 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.337301 kubelet[2602]: E0430 03:29:49.337278 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.337301 kubelet[2602]: W0430 03:29:49.337300 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.337532 kubelet[2602]: E0430 03:29:49.337390 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.337711 kubelet[2602]: E0430 03:29:49.337639 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.337711 kubelet[2602]: W0430 03:29:49.337651 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.337711 kubelet[2602]: E0430 03:29:49.337667 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.337711 kubelet[2602]: I0430 03:29:49.337704 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afe01694-9e56-4cfa-9fa0-0fe8aaed621f-registration-dir\") pod \"csi-node-driver-rbw9s\" (UID: \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\") " pod="calico-system/csi-node-driver-rbw9s" Apr 30 03:29:49.338625 kubelet[2602]: E0430 03:29:49.338598 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.338625 kubelet[2602]: W0430 03:29:49.338623 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.339092 kubelet[2602]: E0430 03:29:49.338648 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.339092 kubelet[2602]: I0430 03:29:49.338679 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afe01694-9e56-4cfa-9fa0-0fe8aaed621f-kubelet-dir\") pod \"csi-node-driver-rbw9s\" (UID: \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\") " pod="calico-system/csi-node-driver-rbw9s" Apr 30 03:29:49.339513 kubelet[2602]: E0430 03:29:49.339484 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.339513 kubelet[2602]: W0430 03:29:49.339510 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.339722 kubelet[2602]: E0430 03:29:49.339539 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.339722 kubelet[2602]: I0430 03:29:49.339568 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/afe01694-9e56-4cfa-9fa0-0fe8aaed621f-varrun\") pod \"csi-node-driver-rbw9s\" (UID: \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\") " pod="calico-system/csi-node-driver-rbw9s" Apr 30 03:29:49.340263 kubelet[2602]: E0430 03:29:49.340235 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.340263 kubelet[2602]: W0430 03:29:49.340260 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.340670 kubelet[2602]: E0430 03:29:49.340364 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.340992 kubelet[2602]: E0430 03:29:49.340773 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.340992 kubelet[2602]: W0430 03:29:49.340840 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.341424 kubelet[2602]: E0430 03:29:49.341123 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.343167 kubelet[2602]: E0430 03:29:49.343097 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.343167 kubelet[2602]: W0430 03:29:49.343117 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.343729 kubelet[2602]: E0430 03:29:49.343276 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.343729 kubelet[2602]: E0430 03:29:49.343541 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.343729 kubelet[2602]: W0430 03:29:49.343555 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.343729 kubelet[2602]: E0430 03:29:49.343572 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.344801 kubelet[2602]: E0430 03:29:49.343945 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.344801 kubelet[2602]: W0430 03:29:49.343959 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.344801 kubelet[2602]: E0430 03:29:49.343975 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.344801 kubelet[2602]: E0430 03:29:49.344376 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.344801 kubelet[2602]: W0430 03:29:49.344402 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.344801 kubelet[2602]: E0430 03:29:49.344419 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.401629 containerd[1462]: time="2025-04-30T03:29:49.401525895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnm47,Uid:0fe07556-52b9-47e3-914d-856a747fb4e0,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:49.441423 kubelet[2602]: E0430 03:29:49.441296 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.441423 kubelet[2602]: W0430 03:29:49.441346 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.441423 kubelet[2602]: E0430 03:29:49.441377 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.443166 kubelet[2602]: E0430 03:29:49.442636 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.443166 kubelet[2602]: W0430 03:29:49.442665 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.443166 kubelet[2602]: E0430 03:29:49.442715 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.444211 kubelet[2602]: E0430 03:29:49.443910 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.444211 kubelet[2602]: W0430 03:29:49.443933 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.444211 kubelet[2602]: E0430 03:29:49.444074 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.445522 kubelet[2602]: E0430 03:29:49.444996 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.445522 kubelet[2602]: W0430 03:29:49.445015 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.445522 kubelet[2602]: E0430 03:29:49.445114 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.446338 kubelet[2602]: E0430 03:29:49.446174 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.446338 kubelet[2602]: W0430 03:29:49.446193 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.447064 kubelet[2602]: E0430 03:29:49.446874 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.448055 kubelet[2602]: E0430 03:29:49.447874 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.448055 kubelet[2602]: W0430 03:29:49.447959 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.448055 kubelet[2602]: E0430 03:29:49.448001 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.449585 kubelet[2602]: E0430 03:29:49.449103 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.449585 kubelet[2602]: W0430 03:29:49.449121 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.449585 kubelet[2602]: E0430 03:29:49.449387 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.452654 kubelet[2602]: E0430 03:29:49.451142 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.452654 kubelet[2602]: W0430 03:29:49.451162 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.453006 kubelet[2602]: E0430 03:29:49.452912 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.453455 kubelet[2602]: E0430 03:29:49.453434 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.453950 kubelet[2602]: W0430 03:29:49.453565 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.453950 kubelet[2602]: E0430 03:29:49.453716 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.454188 kubelet[2602]: E0430 03:29:49.454172 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.454278 kubelet[2602]: W0430 03:29:49.454264 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.454458 kubelet[2602]: E0430 03:29:49.454442 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.455197 kubelet[2602]: E0430 03:29:49.455179 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.455314 kubelet[2602]: W0430 03:29:49.455301 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.456616 kubelet[2602]: E0430 03:29:49.456488 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.457061 kubelet[2602]: E0430 03:29:49.457042 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.457360 kubelet[2602]: W0430 03:29:49.457185 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.457360 kubelet[2602]: E0430 03:29:49.457326 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.458122 kubelet[2602]: E0430 03:29:49.458100 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.458245 kubelet[2602]: W0430 03:29:49.458229 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.458424 kubelet[2602]: E0430 03:29:49.458409 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.459684 kubelet[2602]: E0430 03:29:49.459539 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.459684 kubelet[2602]: W0430 03:29:49.459559 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.460702 kubelet[2602]: E0430 03:29:49.459945 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.461490 kubelet[2602]: E0430 03:29:49.461242 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.461490 kubelet[2602]: W0430 03:29:49.461261 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.462748 kubelet[2602]: E0430 03:29:49.461933 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.463089 kubelet[2602]: E0430 03:29:49.462920 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.463089 kubelet[2602]: W0430 03:29:49.462939 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.463576 kubelet[2602]: E0430 03:29:49.463354 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.463900 kubelet[2602]: E0430 03:29:49.463805 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.463900 kubelet[2602]: W0430 03:29:49.463822 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.464452 kubelet[2602]: E0430 03:29:49.464290 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.465380 kubelet[2602]: E0430 03:29:49.465164 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.465380 kubelet[2602]: W0430 03:29:49.465183 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.465772 kubelet[2602]: E0430 03:29:49.465590 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.466252 kubelet[2602]: E0430 03:29:49.466234 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.466565 kubelet[2602]: W0430 03:29:49.466367 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.466960 kubelet[2602]: E0430 03:29:49.466712 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.467748 kubelet[2602]: E0430 03:29:49.467419 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.467748 kubelet[2602]: W0430 03:29:49.467436 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.468193 kubelet[2602]: E0430 03:29:49.468169 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.468616 kubelet[2602]: E0430 03:29:49.468515 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.468616 kubelet[2602]: W0430 03:29:49.468531 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.469172 kubelet[2602]: E0430 03:29:49.468943 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.469819 kubelet[2602]: E0430 03:29:49.469803 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.470263 kubelet[2602]: W0430 03:29:49.470219 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.470709 kubelet[2602]: E0430 03:29:49.470619 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.471451 kubelet[2602]: E0430 03:29:49.471435 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.471739 kubelet[2602]: W0430 03:29:49.471567 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.472688 containerd[1462]: time="2025-04-30T03:29:49.472351526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6889d74865-h84f7,Uid:8104c962-c19c-42ff-8eb1-2545483a40fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\"" Apr 30 03:29:49.472787 kubelet[2602]: E0430 03:29:49.472497 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.473605 kubelet[2602]: E0430 03:29:49.473585 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.475013 kubelet[2602]: W0430 03:29:49.474172 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.475013 kubelet[2602]: E0430 03:29:49.474205 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:49.475937 kubelet[2602]: E0430 03:29:49.475915 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.476271 kubelet[2602]: W0430 03:29:49.476105 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.476271 kubelet[2602]: E0430 03:29:49.476145 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.482151 containerd[1462]: time="2025-04-30T03:29:49.481786196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:49.486454 containerd[1462]: time="2025-04-30T03:29:49.486288673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:49.489207 containerd[1462]: time="2025-04-30T03:29:49.487556637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:49.489207 containerd[1462]: time="2025-04-30T03:29:49.488914135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:49.489570 containerd[1462]: time="2025-04-30T03:29:49.489497950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:49.504688 kubelet[2602]: E0430 03:29:49.504652 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:49.505250 kubelet[2602]: W0430 03:29:49.504881 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:49.505250 kubelet[2602]: E0430 03:29:49.505035 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:49.528792 systemd[1]: Started cri-containerd-74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91.scope - libcontainer container 74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91. Apr 30 03:29:49.585296 containerd[1462]: time="2025-04-30T03:29:49.585146880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnm47,Uid:0fe07556-52b9-47e3-914d-856a747fb4e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\"" Apr 30 03:29:49.953286 systemd[1]: run-containerd-runc-k8s.io-c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45-runc.EebKjG.mount: Deactivated successfully. 
Apr 30 03:29:51.189581 kubelet[2602]: E0430 03:29:51.189528 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:29:51.343309 containerd[1462]: time="2025-04-30T03:29:51.343236352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.344642 containerd[1462]: time="2025-04-30T03:29:51.344561250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:51.346835 containerd[1462]: time="2025-04-30T03:29:51.346759258Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.350348 containerd[1462]: time="2025-04-30T03:29:51.350239150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:51.351471 containerd[1462]: time="2025-04-30T03:29:51.351416037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.869575139s" Apr 30 03:29:51.352626 containerd[1462]: time="2025-04-30T03:29:51.351473043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:51.353990 containerd[1462]: time="2025-04-30T03:29:51.353938856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:29:51.372053 containerd[1462]: time="2025-04-30T03:29:51.371992534Z" level=info msg="CreateContainer within sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:51.402334 containerd[1462]: time="2025-04-30T03:29:51.400451503Z" level=info msg="CreateContainer within sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\"" Apr 30 03:29:51.403989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198259762.mount: Deactivated successfully. Apr 30 03:29:51.407930 containerd[1462]: time="2025-04-30T03:29:51.407605193Z" level=info msg="StartContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\"" Apr 30 03:29:51.466245 systemd[1]: Started cri-containerd-0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f.scope - libcontainer container 0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f. 
Apr 30 03:29:51.532811 containerd[1462]: time="2025-04-30T03:29:51.532736669Z" level=info msg="StartContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" returns successfully" Apr 30 03:29:52.321777 containerd[1462]: time="2025-04-30T03:29:52.321717937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.327013 containerd[1462]: time="2025-04-30T03:29:52.326932965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:52.331805 containerd[1462]: time="2025-04-30T03:29:52.327526007Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.336393 containerd[1462]: time="2025-04-30T03:29:52.336329638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.338392 containerd[1462]: time="2025-04-30T03:29:52.338325582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 984.326471ms" Apr 30 03:29:52.338392 containerd[1462]: time="2025-04-30T03:29:52.338394637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:52.349183 containerd[1462]: 
time="2025-04-30T03:29:52.349124284Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:52.356971 kubelet[2602]: E0430 03:29:52.356923 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.356971 kubelet[2602]: W0430 03:29:52.356960 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.357626 kubelet[2602]: E0430 03:29:52.357007 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.361027 kubelet[2602]: E0430 03:29:52.357702 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.361027 kubelet[2602]: W0430 03:29:52.357727 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.361027 kubelet[2602]: E0430 03:29:52.357931 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:52.394804 kubelet[2602]: E0430 03:29:52.394752 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.394804 kubelet[2602]: W0430 03:29:52.394796 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.395065 kubelet[2602]: E0430 03:29:52.394826 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.397223 kubelet[2602]: E0430 03:29:52.397171 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.397389 kubelet[2602]: W0430 03:29:52.397238 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.397801 containerd[1462]: time="2025-04-30T03:29:52.397722294Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\"" Apr 30 03:29:52.398912 kubelet[2602]: E0430 03:29:52.398730 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.398912 kubelet[2602]: W0430 03:29:52.398853 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.399105 kubelet[2602]: E0430 
03:29:52.398881 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.399302 kubelet[2602]: E0430 03:29:52.399168 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.399994 containerd[1462]: time="2025-04-30T03:29:52.399955298Z" level=info msg="StartContainer for \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\"" Apr 30 03:29:52.402788 kubelet[2602]: E0430 03:29:52.401617 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.402788 kubelet[2602]: W0430 03:29:52.401643 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.402788 kubelet[2602]: E0430 03:29:52.401692 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.402788 kubelet[2602]: E0430 03:29:52.402187 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.402788 kubelet[2602]: W0430 03:29:52.402202 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.403202 kubelet[2602]: E0430 03:29:52.403125 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:52.407349 kubelet[2602]: E0430 03:29:52.407321 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.407477 kubelet[2602]: W0430 03:29:52.407355 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.407477 kubelet[2602]: E0430 03:29:52.407399 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.411073 kubelet[2602]: E0430 03:29:52.408972 2602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:52.411073 kubelet[2602]: W0430 03:29:52.408997 2602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:52.411073 kubelet[2602]: E0430 03:29:52.409037 2602 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:52.497201 systemd[1]: Started cri-containerd-64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e.scope - libcontainer container 64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e. Apr 30 03:29:52.572029 containerd[1462]: time="2025-04-30T03:29:52.570850067Z" level=info msg="StartContainer for \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\" returns successfully" Apr 30 03:29:52.596142 systemd[1]: cri-containerd-64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e.scope: Deactivated successfully. 
Apr 30 03:29:52.636413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e-rootfs.mount: Deactivated successfully. Apr 30 03:29:53.189435 kubelet[2602]: E0430 03:29:53.189362 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:29:53.206574 containerd[1462]: time="2025-04-30T03:29:53.206277526Z" level=info msg="shim disconnected" id=64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e namespace=k8s.io Apr 30 03:29:53.206574 containerd[1462]: time="2025-04-30T03:29:53.206426885Z" level=warning msg="cleaning up after shim disconnected" id=64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e namespace=k8s.io Apr 30 03:29:53.206574 containerd[1462]: time="2025-04-30T03:29:53.206445667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:53.319368 kubelet[2602]: I0430 03:29:53.319322 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:53.322266 containerd[1462]: time="2025-04-30T03:29:53.321468187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:53.342715 kubelet[2602]: I0430 03:29:53.342589 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6889d74865-h84f7" podStartSLOduration=3.469885327 podStartE2EDuration="5.342559607s" podCreationTimestamp="2025-04-30 03:29:48 +0000 UTC" firstStartedPulling="2025-04-30 03:29:49.481048793 +0000 UTC m=+21.455690531" lastFinishedPulling="2025-04-30 03:29:51.353722816 +0000 UTC m=+23.328364811" observedRunningTime="2025-04-30 03:29:52.356199556 +0000 UTC m=+24.330841302" watchObservedRunningTime="2025-04-30 
03:29:53.342559607 +0000 UTC m=+25.317201353" Apr 30 03:29:55.188963 kubelet[2602]: E0430 03:29:55.188901 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:29:57.189435 kubelet[2602]: E0430 03:29:57.189348 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:29:57.303905 containerd[1462]: time="2025-04-30T03:29:57.303789871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.305214 containerd[1462]: time="2025-04-30T03:29:57.305138512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:57.306748 containerd[1462]: time="2025-04-30T03:29:57.306678706Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.310029 containerd[1462]: time="2025-04-30T03:29:57.309946586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.311855 containerd[1462]: time="2025-04-30T03:29:57.311001054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.989475909s" Apr 30 03:29:57.311855 containerd[1462]: time="2025-04-30T03:29:57.311047715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:57.314279 containerd[1462]: time="2025-04-30T03:29:57.314234799Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:57.336523 containerd[1462]: time="2025-04-30T03:29:57.336457379Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\"" Apr 30 03:29:57.339280 containerd[1462]: time="2025-04-30T03:29:57.338968808Z" level=info msg="StartContainer for \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\"" Apr 30 03:29:57.400194 systemd[1]: Started cri-containerd-f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1.scope - libcontainer container f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1. 
Apr 30 03:29:57.442997 containerd[1462]: time="2025-04-30T03:29:57.442760554Z" level=info msg="StartContainer for \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\" returns successfully" Apr 30 03:29:58.537792 containerd[1462]: time="2025-04-30T03:29:58.537676449Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:29:58.539979 systemd[1]: cri-containerd-f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1.scope: Deactivated successfully. Apr 30 03:29:58.565310 kubelet[2602]: I0430 03:29:58.565000 2602 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:29:58.583833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:58.611333 kubelet[2602]: I0430 03:29:58.609206 2602 topology_manager.go:215] "Topology Admit Handler" podUID="b8dfc8e0-f268-4281-b376-50f8468daeb0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j4zx2" Apr 30 03:29:58.616295 kubelet[2602]: I0430 03:29:58.615810 2602 topology_manager.go:215] "Topology Admit Handler" podUID="964497a6-75e1-47e5-836b-3b870a46fee8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gv4cp" Apr 30 03:29:58.618312 kubelet[2602]: I0430 03:29:58.618273 2602 topology_manager.go:215] "Topology Admit Handler" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80" podNamespace="calico-system" podName="calico-kube-controllers-64db747896-slftr" Apr 30 03:29:58.629670 kubelet[2602]: I0430 03:29:58.628909 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6xm\" (UniqueName: \"kubernetes.io/projected/4490f834-9862-47b3-94e9-1a6cf67f5b80-kube-api-access-tc6xm\") pod \"calico-kube-controllers-64db747896-slftr\" (UID: \"4490f834-9862-47b3-94e9-1a6cf67f5b80\") " pod="calico-system/calico-kube-controllers-64db747896-slftr" Apr 30 03:29:58.629670 kubelet[2602]: I0430 03:29:58.628973 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87jc\" (UniqueName: \"kubernetes.io/projected/964497a6-75e1-47e5-836b-3b870a46fee8-kube-api-access-p87jc\") pod \"coredns-7db6d8ff4d-gv4cp\" (UID: \"964497a6-75e1-47e5-836b-3b870a46fee8\") " pod="kube-system/coredns-7db6d8ff4d-gv4cp" Apr 30 03:29:58.629670 kubelet[2602]: I0430 03:29:58.629011 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8dfc8e0-f268-4281-b376-50f8468daeb0-config-volume\") pod \"coredns-7db6d8ff4d-j4zx2\" (UID: \"b8dfc8e0-f268-4281-b376-50f8468daeb0\") " pod="kube-system/coredns-7db6d8ff4d-j4zx2" Apr 30 03:29:58.629670 kubelet[2602]: I0430 
03:29:58.629039 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l89vt\" (UniqueName: \"kubernetes.io/projected/b8dfc8e0-f268-4281-b376-50f8468daeb0-kube-api-access-l89vt\") pod \"coredns-7db6d8ff4d-j4zx2\" (UID: \"b8dfc8e0-f268-4281-b376-50f8468daeb0\") " pod="kube-system/coredns-7db6d8ff4d-j4zx2" Apr 30 03:29:58.629670 kubelet[2602]: I0430 03:29:58.629076 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4490f834-9862-47b3-94e9-1a6cf67f5b80-tigera-ca-bundle\") pod \"calico-kube-controllers-64db747896-slftr\" (UID: \"4490f834-9862-47b3-94e9-1a6cf67f5b80\") " pod="calico-system/calico-kube-controllers-64db747896-slftr" Apr 30 03:29:58.631310 kubelet[2602]: I0430 03:29:58.629106 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/964497a6-75e1-47e5-836b-3b870a46fee8-config-volume\") pod \"coredns-7db6d8ff4d-gv4cp\" (UID: \"964497a6-75e1-47e5-836b-3b870a46fee8\") " pod="kube-system/coredns-7db6d8ff4d-gv4cp" Apr 30 03:29:58.635261 systemd[1]: Created slice kubepods-burstable-podb8dfc8e0_f268_4281_b376_50f8468daeb0.slice - libcontainer container kubepods-burstable-podb8dfc8e0_f268_4281_b376_50f8468daeb0.slice. Apr 30 03:29:58.639008 kubelet[2602]: I0430 03:29:58.636213 2602 topology_manager.go:215] "Topology Admit Handler" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca" podNamespace="calico-apiserver" podName="calico-apiserver-74fd85f4d9-979dw" Apr 30 03:29:58.650485 systemd[1]: Created slice kubepods-burstable-pod964497a6_75e1_47e5_836b_3b870a46fee8.slice - libcontainer container kubepods-burstable-pod964497a6_75e1_47e5_836b_3b870a46fee8.slice. 
Apr 30 03:29:58.662440 kubelet[2602]: I0430 03:29:58.662389 2602 topology_manager.go:215] "Topology Admit Handler" podUID="42a679bb-d883-4cd3-a4bf-74c95efe17a5" podNamespace="calico-apiserver" podName="calico-apiserver-74dfd89d4c-ndtsq" Apr 30 03:29:58.667468 systemd[1]: Created slice kubepods-besteffort-pod4490f834_9862_47b3_94e9_1a6cf67f5b80.slice - libcontainer container kubepods-besteffort-pod4490f834_9862_47b3_94e9_1a6cf67f5b80.slice. Apr 30 03:29:58.672315 kubelet[2602]: I0430 03:29:58.670057 2602 topology_manager.go:215] "Topology Admit Handler" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5" podNamespace="calico-apiserver" podName="calico-apiserver-74fd85f4d9-5qtrf" Apr 30 03:29:58.686085 systemd[1]: Created slice kubepods-besteffort-pod9bbf289e_3bd8_4b3c_9652_ef642934c0ca.slice - libcontainer container kubepods-besteffort-pod9bbf289e_3bd8_4b3c_9652_ef642934c0ca.slice. Apr 30 03:29:58.701993 systemd[1]: Created slice kubepods-besteffort-pod42a679bb_d883_4cd3_a4bf_74c95efe17a5.slice - libcontainer container kubepods-besteffort-pod42a679bb_d883_4cd3_a4bf_74c95efe17a5.slice. Apr 30 03:29:58.716222 systemd[1]: Created slice kubepods-besteffort-pod87f3a9e5_5fac_471b_a36c_1452742abca5.slice - libcontainer container kubepods-besteffort-pod87f3a9e5_5fac_471b_a36c_1452742abca5.slice. 
Apr 30 03:29:58.730270 kubelet[2602]: I0430 03:29:58.730207 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbdz\" (UniqueName: \"kubernetes.io/projected/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-kube-api-access-jzbdz\") pod \"calico-apiserver-74fd85f4d9-979dw\" (UID: \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\") " pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw"
Apr 30 03:29:58.730485 kubelet[2602]: I0430 03:29:58.730294 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-calico-apiserver-certs\") pod \"calico-apiserver-74fd85f4d9-979dw\" (UID: \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\") " pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw"
Apr 30 03:29:58.730485 kubelet[2602]: I0430 03:29:58.730326 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm7z8\" (UniqueName: \"kubernetes.io/projected/42a679bb-d883-4cd3-a4bf-74c95efe17a5-kube-api-access-xm7z8\") pod \"calico-apiserver-74dfd89d4c-ndtsq\" (UID: \"42a679bb-d883-4cd3-a4bf-74c95efe17a5\") " pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq"
Apr 30 03:29:58.730485 kubelet[2602]: I0430 03:29:58.730351 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87f3a9e5-5fac-471b-a36c-1452742abca5-calico-apiserver-certs\") pod \"calico-apiserver-74fd85f4d9-5qtrf\" (UID: \"87f3a9e5-5fac-471b-a36c-1452742abca5\") " pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf"
Apr 30 03:29:58.730485 kubelet[2602]: I0430 03:29:58.730375 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42a679bb-d883-4cd3-a4bf-74c95efe17a5-calico-apiserver-certs\") pod \"calico-apiserver-74dfd89d4c-ndtsq\" (UID: \"42a679bb-d883-4cd3-a4bf-74c95efe17a5\") " pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq"
Apr 30 03:29:58.730485 kubelet[2602]: I0430 03:29:58.730423 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpj5\" (UniqueName: \"kubernetes.io/projected/87f3a9e5-5fac-471b-a36c-1452742abca5-kube-api-access-7hpj5\") pod \"calico-apiserver-74fd85f4d9-5qtrf\" (UID: \"87f3a9e5-5fac-471b-a36c-1452742abca5\") " pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf"
Apr 30 03:29:58.951453 containerd[1462]: time="2025-04-30T03:29:58.951201891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4zx2,Uid:b8dfc8e0-f268-4281-b376-50f8468daeb0,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:58.959116 containerd[1462]: time="2025-04-30T03:29:58.959015732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gv4cp,Uid:964497a6-75e1-47e5-836b-3b870a46fee8,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:58.983393 containerd[1462]: time="2025-04-30T03:29:58.983319598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64db747896-slftr,Uid:4490f834-9862-47b3-94e9-1a6cf67f5b80,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:58.995829 containerd[1462]: time="2025-04-30T03:29:58.995468433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-979dw,Uid:9bbf289e-3bd8-4b3c-9652-ef642934c0ca,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 03:29:59.011412 containerd[1462]: time="2025-04-30T03:29:59.011301696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-ndtsq,Uid:42a679bb-d883-4cd3-a4bf-74c95efe17a5,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 03:29:59.021720 containerd[1462]: time="2025-04-30T03:29:59.021658669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-5qtrf,Uid:87f3a9e5-5fac-471b-a36c-1452742abca5,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 03:29:59.197370 systemd[1]: Created slice kubepods-besteffort-podafe01694_9e56_4cfa_9fa0_0fe8aaed621f.slice - libcontainer container kubepods-besteffort-podafe01694_9e56_4cfa_9fa0_0fe8aaed621f.slice.
Apr 30 03:29:59.201653 containerd[1462]: time="2025-04-30T03:29:59.201019912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbw9s,Uid:afe01694-9e56-4cfa-9fa0-0fe8aaed621f,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:59.368541 containerd[1462]: time="2025-04-30T03:29:59.368202231Z" level=info msg="shim disconnected" id=f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1 namespace=k8s.io
Apr 30 03:29:59.368541 containerd[1462]: time="2025-04-30T03:29:59.368274828Z" level=warning msg="cleaning up after shim disconnected" id=f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1 namespace=k8s.io
Apr 30 03:29:59.368541 containerd[1462]: time="2025-04-30T03:29:59.368300378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:59.834217 containerd[1462]: time="2025-04-30T03:29:59.834015198Z" level=error msg="Failed to destroy network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.841320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491-shm.mount: Deactivated successfully.
Apr 30 03:29:59.845226 containerd[1462]: time="2025-04-30T03:29:59.845058947Z" level=error msg="encountered an error cleaning up failed sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.845605 containerd[1462]: time="2025-04-30T03:29:59.845558698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-5qtrf,Uid:87f3a9e5-5fac-471b-a36c-1452742abca5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.846369 kubelet[2602]: E0430 03:29:59.846284 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.849935 kubelet[2602]: E0430 03:29:59.847000 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf"
Apr 30 03:29:59.849935 kubelet[2602]: E0430 03:29:59.847046 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf"
Apr 30 03:29:59.849935 kubelet[2602]: E0430 03:29:59.847138 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fd85f4d9-5qtrf_calico-apiserver(87f3a9e5-5fac-471b-a36c-1452742abca5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fd85f4d9-5qtrf_calico-apiserver(87f3a9e5-5fac-471b-a36c-1452742abca5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5"
Apr 30 03:29:59.854646 containerd[1462]: time="2025-04-30T03:29:59.854167439Z" level=error msg="Failed to destroy network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.857037 containerd[1462]: time="2025-04-30T03:29:59.856963340Z" level=error msg="encountered an error cleaning up failed sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.858623 containerd[1462]: time="2025-04-30T03:29:59.858553607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gv4cp,Uid:964497a6-75e1-47e5-836b-3b870a46fee8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.860454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09-shm.mount: Deactivated successfully.
Apr 30 03:29:59.864166 kubelet[2602]: E0430 03:29:59.863796 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.864166 kubelet[2602]: E0430 03:29:59.863976 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gv4cp"
Apr 30 03:29:59.864166 kubelet[2602]: E0430 03:29:59.864010 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gv4cp"
Apr 30 03:29:59.864694 kubelet[2602]: E0430 03:29:59.864103 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gv4cp_kube-system(964497a6-75e1-47e5-836b-3b870a46fee8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gv4cp_kube-system(964497a6-75e1-47e5-836b-3b870a46fee8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gv4cp" podUID="964497a6-75e1-47e5-836b-3b870a46fee8"
Apr 30 03:29:59.891314 containerd[1462]: time="2025-04-30T03:29:59.891129424Z" level=error msg="Failed to destroy network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.893941 containerd[1462]: time="2025-04-30T03:29:59.891774577Z" level=error msg="encountered an error cleaning up failed sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.893941 containerd[1462]: time="2025-04-30T03:29:59.891856037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4zx2,Uid:b8dfc8e0-f268-4281-b376-50f8468daeb0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.894161 kubelet[2602]: E0430 03:29:59.892130 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.894161 kubelet[2602]: E0430 03:29:59.892210 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4zx2"
Apr 30 03:29:59.894161 kubelet[2602]: E0430 03:29:59.892243 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4zx2"
Apr 30 03:29:59.894365 kubelet[2602]: E0430 03:29:59.892314 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j4zx2_kube-system(b8dfc8e0-f268-4281-b376-50f8468daeb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j4zx2_kube-system(b8dfc8e0-f268-4281-b376-50f8468daeb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j4zx2" podUID="b8dfc8e0-f268-4281-b376-50f8468daeb0"
Apr 30 03:29:59.899352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e-shm.mount: Deactivated successfully.
Apr 30 03:29:59.905125 containerd[1462]: time="2025-04-30T03:29:59.899753544Z" level=error msg="Failed to destroy network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.905125 containerd[1462]: time="2025-04-30T03:29:59.901258015Z" level=error msg="encountered an error cleaning up failed sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.905125 containerd[1462]: time="2025-04-30T03:29:59.901340180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbw9s,Uid:afe01694-9e56-4cfa-9fa0-0fe8aaed621f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.907920 kubelet[2602]: E0430 03:29:59.905506 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.907920 kubelet[2602]: E0430 03:29:59.905591 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rbw9s"
Apr 30 03:29:59.907920 kubelet[2602]: E0430 03:29:59.905628 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rbw9s"
Apr 30 03:29:59.908199 kubelet[2602]: E0430 03:29:59.905691 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rbw9s_calico-system(afe01694-9e56-4cfa-9fa0-0fe8aaed621f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rbw9s_calico-system(afe01694-9e56-4cfa-9fa0-0fe8aaed621f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f"
Apr 30 03:29:59.908591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158-shm.mount: Deactivated successfully.
Apr 30 03:29:59.913048 containerd[1462]: time="2025-04-30T03:29:59.912981597Z" level=error msg="Failed to destroy network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.914296 containerd[1462]: time="2025-04-30T03:29:59.913972318Z" level=error msg="encountered an error cleaning up failed sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.914296 containerd[1462]: time="2025-04-30T03:29:59.914058320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64db747896-slftr,Uid:4490f834-9862-47b3-94e9-1a6cf67f5b80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.914595 kubelet[2602]: E0430 03:29:59.914420 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.914595 kubelet[2602]: E0430 03:29:59.914495 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64db747896-slftr"
Apr 30 03:29:59.914595 kubelet[2602]: E0430 03:29:59.914523 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64db747896-slftr"
Apr 30 03:29:59.914775 kubelet[2602]: E0430 03:29:59.914580 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64db747896-slftr_calico-system(4490f834-9862-47b3-94e9-1a6cf67f5b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64db747896-slftr_calico-system(4490f834-9862-47b3-94e9-1a6cf67f5b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64db747896-slftr" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80"
Apr 30 03:29:59.920330 containerd[1462]: time="2025-04-30T03:29:59.920274668Z" level=error msg="Failed to destroy network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.920736 containerd[1462]: time="2025-04-30T03:29:59.920691517Z" level=error msg="encountered an error cleaning up failed sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.920870 containerd[1462]: time="2025-04-30T03:29:59.920781298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-979dw,Uid:9bbf289e-3bd8-4b3c-9652-ef642934c0ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.921609 kubelet[2602]: E0430 03:29:59.921039 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.921609 kubelet[2602]: E0430 03:29:59.921105 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw"
Apr 30 03:29:59.921609 kubelet[2602]: E0430 03:29:59.921140 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw"
Apr 30 03:29:59.921807 kubelet[2602]: E0430 03:29:59.921202 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fd85f4d9-979dw_calico-apiserver(9bbf289e-3bd8-4b3c-9652-ef642934c0ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fd85f4d9-979dw_calico-apiserver(9bbf289e-3bd8-4b3c-9652-ef642934c0ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca"
Apr 30 03:29:59.924448 containerd[1462]: time="2025-04-30T03:29:59.924386551Z" level=error msg="Failed to destroy network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.924835 containerd[1462]: time="2025-04-30T03:29:59.924795153Z" level=error msg="encountered an error cleaning up failed sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.925067 containerd[1462]: time="2025-04-30T03:29:59.924923655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-ndtsq,Uid:42a679bb-d883-4cd3-a4bf-74c95efe17a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.925934 kubelet[2602]: E0430 03:29:59.925698 2602 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:59.925934 kubelet[2602]: E0430 03:29:59.925787 2602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq"
Apr 30 03:29:59.925934 kubelet[2602]: E0430 03:29:59.925831 2602 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq"
Apr 30 03:29:59.926225 kubelet[2602]: E0430 03:29:59.925912 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74dfd89d4c-ndtsq_calico-apiserver(42a679bb-d883-4cd3-a4bf-74c95efe17a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74dfd89d4c-ndtsq_calico-apiserver(42a679bb-d883-4cd3-a4bf-74c95efe17a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq" podUID="42a679bb-d883-4cd3-a4bf-74c95efe17a5"
Apr 30 03:30:00.346138 kubelet[2602]: I0430 03:30:00.346079 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3"
Apr 30 03:30:00.347180 containerd[1462]: time="2025-04-30T03:30:00.347124429Z" level=info msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\""
Apr 30 03:30:00.347775 containerd[1462]: time="2025-04-30T03:30:00.347402851Z" level=info msg="Ensure that sandbox 020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3 in task-service has been cleanup successfully"
Apr 30 03:30:00.351098 kubelet[2602]: I0430 03:30:00.350964 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491"
Apr 30 03:30:00.353967 containerd[1462]: time="2025-04-30T03:30:00.353796925Z" level=info msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\""
Apr 30 03:30:00.354600 containerd[1462]: time="2025-04-30T03:30:00.354458166Z" level=info msg="Ensure that sandbox bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491 in task-service has been cleanup successfully"
Apr 30 03:30:00.357239 kubelet[2602]: I0430 03:30:00.357004 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09"
Apr 30 03:30:00.362054 containerd[1462]: time="2025-04-30T03:30:00.361167718Z" level=info msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\""
Apr 30 03:30:00.364650 containerd[1462]: time="2025-04-30T03:30:00.363971913Z" level=info msg="Ensure that sandbox a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09 in task-service has been cleanup successfully"
Apr 30 03:30:00.366356 kubelet[2602]: I0430 03:30:00.366249 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e"
Apr 30 03:30:00.372296 kubelet[2602]: I0430 03:30:00.372257 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158"
Apr 30 03:30:00.375494 containerd[1462]: time="2025-04-30T03:30:00.372522122Z" level=info msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\""
Apr 30 03:30:00.376049 containerd[1462]: time="2025-04-30T03:30:00.375820770Z" level=info msg="Ensure that sandbox 967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e in task-service has been cleanup successfully"
Apr 30 03:30:00.378624 containerd[1462]: time="2025-04-30T03:30:00.378577107Z" level=info msg="StopPodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\""
Apr 30 03:30:00.382680 containerd[1462]: time="2025-04-30T03:30:00.382002008Z" level=info msg="Ensure that sandbox 324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158 in task-service has been cleanup successfully"
Apr 30 03:30:00.398151 containerd[1462]: time="2025-04-30T03:30:00.397769259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
Apr 30 03:30:00.405662 kubelet[2602]: I0430 03:30:00.405621 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960"
Apr 30 03:30:00.408936 containerd[1462]: time="2025-04-30T03:30:00.408237896Z" level=info msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\""
Apr 30 03:30:00.408936 containerd[1462]: time="2025-04-30T03:30:00.408571412Z" level=info msg="Ensure that sandbox 89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960 in task-service has been cleanup successfully"
Apr 30 03:30:00.426923 kubelet[2602]: I0430 03:30:00.425822 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e"
Apr 30 03:30:00.435488 containerd[1462]: time="2025-04-30T03:30:00.435428776Z" level=info msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\""
Apr 30 03:30:00.440500 containerd[1462]: time="2025-04-30T03:30:00.440062598Z" level=info msg="Ensure that sandbox 077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e in task-service has been cleanup successfully"
Apr 30 03:30:00.552255 containerd[1462]: time="2025-04-30T03:30:00.552179688Z" level=error msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" failed" error="failed to destroy network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:30:00.552878 kubelet[2602]: E0430 03:30:00.552822 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3"
Apr 30 03:30:00.553170 kubelet[2602]: E0430 03:30:00.553033 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3"}
Apr 30 03:30:00.553404 kubelet[2602]: E0430 03:30:00.553199 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4490f834-9862-47b3-94e9-1a6cf67f5b80\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.553404 kubelet[2602]: E0430 03:30:00.553361 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4490f834-9862-47b3-94e9-1a6cf67f5b80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64db747896-slftr" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80" Apr 30 03:30:00.573166 containerd[1462]: time="2025-04-30T03:30:00.572994646Z" level=error msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" failed" error="failed to destroy network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.573926 kubelet[2602]: E0430 03:30:00.573523 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:00.573926 kubelet[2602]: E0430 03:30:00.573595 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09"} Apr 30 03:30:00.573926 kubelet[2602]: E0430 03:30:00.573658 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"964497a6-75e1-47e5-836b-3b870a46fee8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.573926 kubelet[2602]: E0430 03:30:00.573699 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"964497a6-75e1-47e5-836b-3b870a46fee8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gv4cp" podUID="964497a6-75e1-47e5-836b-3b870a46fee8" Apr 30 03:30:00.582296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960-shm.mount: Deactivated successfully. Apr 30 03:30:00.584126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e-shm.mount: Deactivated successfully. 
Apr 30 03:30:00.584237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3-shm.mount: Deactivated successfully. Apr 30 03:30:00.593975 containerd[1462]: time="2025-04-30T03:30:00.593865596Z" level=error msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" failed" error="failed to destroy network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.594584 kubelet[2602]: E0430 03:30:00.594538 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:00.594843 kubelet[2602]: E0430 03:30:00.594782 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491"} Apr 30 03:30:00.595136 kubelet[2602]: E0430 03:30:00.595032 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87f3a9e5-5fac-471b-a36c-1452742abca5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Apr 30 03:30:00.595451 kubelet[2602]: E0430 03:30:00.595413 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87f3a9e5-5fac-471b-a36c-1452742abca5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5" Apr 30 03:30:00.606318 containerd[1462]: time="2025-04-30T03:30:00.606133796Z" level=error msg="StopPodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" failed" error="failed to destroy network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.607232 kubelet[2602]: E0430 03:30:00.606985 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:00.607734 kubelet[2602]: E0430 03:30:00.607636 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158"} Apr 30 03:30:00.608988 kubelet[2602]: E0430 03:30:00.607762 2602 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.608988 kubelet[2602]: E0430 03:30:00.607822 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"afe01694-9e56-4cfa-9fa0-0fe8aaed621f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rbw9s" podUID="afe01694-9e56-4cfa-9fa0-0fe8aaed621f" Apr 30 03:30:00.610685 containerd[1462]: time="2025-04-30T03:30:00.610486455Z" level=error msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" failed" error="failed to destroy network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.611288 kubelet[2602]: E0430 03:30:00.611037 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:00.611288 kubelet[2602]: E0430 03:30:00.611135 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e"} Apr 30 03:30:00.611288 kubelet[2602]: E0430 03:30:00.611190 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8dfc8e0-f268-4281-b376-50f8468daeb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.611288 kubelet[2602]: E0430 03:30:00.611240 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8dfc8e0-f268-4281-b376-50f8468daeb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j4zx2" podUID="b8dfc8e0-f268-4281-b376-50f8468daeb0" Apr 30 03:30:00.621546 containerd[1462]: time="2025-04-30T03:30:00.621477672Z" level=error msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" failed" error="failed to destroy network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.622092 kubelet[2602]: E0430 03:30:00.621775 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:00.622092 kubelet[2602]: E0430 03:30:00.621853 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960"} Apr 30 03:30:00.622092 kubelet[2602]: E0430 03:30:00.621946 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42a679bb-d883-4cd3-a4bf-74c95efe17a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.622092 kubelet[2602]: E0430 03:30:00.621984 2602 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42a679bb-d883-4cd3-a4bf-74c95efe17a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq" podUID="42a679bb-d883-4cd3-a4bf-74c95efe17a5" Apr 30 03:30:00.623781 containerd[1462]: time="2025-04-30T03:30:00.623716209Z" level=error msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" failed" error="failed to destroy network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:30:00.624170 kubelet[2602]: E0430 03:30:00.624105 2602 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:00.624170 kubelet[2602]: E0430 03:30:00.624162 2602 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e"} Apr 30 03:30:00.624357 kubelet[2602]: E0430 03:30:00.624209 2602 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:30:00.624357 kubelet[2602]: E0430 03:30:00.624248 2602 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca" Apr 30 03:30:07.043505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037266422.mount: Deactivated successfully. Apr 30 03:30:07.077117 containerd[1462]: time="2025-04-30T03:30:07.077037184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:07.078635 containerd[1462]: time="2025-04-30T03:30:07.078241715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:30:07.079534 containerd[1462]: time="2025-04-30T03:30:07.079444391Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:07.086971 containerd[1462]: time="2025-04-30T03:30:07.086851936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:07.090919 containerd[1462]: time="2025-04-30T03:30:07.089382993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.691558191s" Apr 30 03:30:07.090919 containerd[1462]: time="2025-04-30T03:30:07.089451792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:30:07.114608 containerd[1462]: time="2025-04-30T03:30:07.114545698Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:30:07.148157 containerd[1462]: time="2025-04-30T03:30:07.148101708Z" level=info msg="CreateContainer within sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\"" Apr 30 03:30:07.149274 containerd[1462]: time="2025-04-30T03:30:07.149236545Z" level=info msg="StartContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\"" Apr 30 03:30:07.189135 systemd[1]: Started cri-containerd-29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9.scope - libcontainer container 29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9. Apr 30 03:30:07.246803 containerd[1462]: time="2025-04-30T03:30:07.245108606Z" level=info msg="StartContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" returns successfully" Apr 30 03:30:07.363820 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:30:07.364046 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 03:30:07.499991 kubelet[2602]: I0430 03:30:07.499874 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rnm47" podStartSLOduration=0.996773409 podStartE2EDuration="18.498070883s" podCreationTimestamp="2025-04-30 03:29:49 +0000 UTC" firstStartedPulling="2025-04-30 03:29:49.590316877 +0000 UTC m=+21.564958614" lastFinishedPulling="2025-04-30 03:30:07.091614353 +0000 UTC m=+39.066256088" observedRunningTime="2025-04-30 03:30:07.494261141 +0000 UTC m=+39.468902886" watchObservedRunningTime="2025-04-30 03:30:07.498070883 +0000 UTC m=+39.472712630" Apr 30 03:30:10.603690 kubelet[2602]: I0430 03:30:10.603392 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:11.513003 kernel: bpftool[4006]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:30:11.899582 systemd-networkd[1374]: vxlan.calico: Link UP Apr 30 03:30:11.900691 systemd-networkd[1374]: vxlan.calico: Gained carrier Apr 30 03:30:12.193763 containerd[1462]: time="2025-04-30T03:30:12.193401878Z" level=info msg="StopPodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\"" Apr 30 03:30:12.196079 containerd[1462]: time="2025-04-30T03:30:12.196033271Z" level=info msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\"" Apr 30 03:30:12.198972 containerd[1462]: time="2025-04-30T03:30:12.198145244Z" level=info msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\"" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.344 [INFO][4116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.344 [INFO][4116] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" iface="eth0" netns="/var/run/netns/cni-6b69ee74-df35-aa10-744d-67bb68a0ed97" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.345 [INFO][4116] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" iface="eth0" netns="/var/run/netns/cni-6b69ee74-df35-aa10-744d-67bb68a0ed97" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.345 [INFO][4116] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" iface="eth0" netns="/var/run/netns/cni-6b69ee74-df35-aa10-744d-67bb68a0ed97" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.345 [INFO][4116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.345 [INFO][4116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.443 [INFO][4145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.446 [INFO][4145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.446 [INFO][4145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.478 [WARNING][4145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.478 [INFO][4145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.489 [INFO][4145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.505661 containerd[1462]: 2025-04-30 03:30:12.496 [INFO][4116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:12.516186 containerd[1462]: time="2025-04-30T03:30:12.511000703Z" level=info msg="TearDown network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" successfully" Apr 30 03:30:12.516186 containerd[1462]: time="2025-04-30T03:30:12.513976841Z" level=info msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" returns successfully" Apr 30 03:30:12.520118 systemd[1]: run-netns-cni\x2d6b69ee74\x2ddf35\x2daa10\x2d744d\x2d67bb68a0ed97.mount: Deactivated successfully. 
Apr 30 03:30:12.526979 containerd[1462]: time="2025-04-30T03:30:12.526464459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gv4cp,Uid:964497a6-75e1-47e5-836b-3b870a46fee8,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.479 [INFO][4115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.480 [INFO][4115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" iface="eth0" netns="/var/run/netns/cni-f7ac8065-0030-6e6f-1ce6-1e9fe17ab342" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.481 [INFO][4115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" iface="eth0" netns="/var/run/netns/cni-f7ac8065-0030-6e6f-1ce6-1e9fe17ab342" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.481 [INFO][4115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" iface="eth0" netns="/var/run/netns/cni-f7ac8065-0030-6e6f-1ce6-1e9fe17ab342" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.481 [INFO][4115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.482 [INFO][4115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.604 [INFO][4161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.605 [INFO][4161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.606 [INFO][4161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.617 [WARNING][4161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.617 [INFO][4161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.619 [INFO][4161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.628403 containerd[1462]: 2025-04-30 03:30:12.625 [INFO][4115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:12.634110 containerd[1462]: time="2025-04-30T03:30:12.631139129Z" level=info msg="TearDown network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" successfully" Apr 30 03:30:12.634110 containerd[1462]: time="2025-04-30T03:30:12.633983485Z" level=info msg="StopPodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" returns successfully" Apr 30 03:30:12.639167 containerd[1462]: time="2025-04-30T03:30:12.637203499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbw9s,Uid:afe01694-9e56-4cfa-9fa0-0fe8aaed621f,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:12.637785 systemd[1]: run-netns-cni\x2df7ac8065\x2d0030\x2d6e6f\x2d1ce6\x2d1e9fe17ab342.mount: Deactivated successfully. 
Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.504 [INFO][4129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.513 [INFO][4129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" iface="eth0" netns="/var/run/netns/cni-6c1b292e-9ddd-46df-4b5f-2075841228a7" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.527 [INFO][4129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" iface="eth0" netns="/var/run/netns/cni-6c1b292e-9ddd-46df-4b5f-2075841228a7" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.527 [INFO][4129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" iface="eth0" netns="/var/run/netns/cni-6c1b292e-9ddd-46df-4b5f-2075841228a7" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.527 [INFO][4129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.527 [INFO][4129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.640 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:12.663998 
containerd[1462]: 2025-04-30 03:30:12.642 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.642 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.655 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.656 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.659 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.663998 containerd[1462]: 2025-04-30 03:30:12.661 [INFO][4129] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:12.667795 containerd[1462]: time="2025-04-30T03:30:12.664259730Z" level=info msg="TearDown network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" successfully" Apr 30 03:30:12.667795 containerd[1462]: time="2025-04-30T03:30:12.664319546Z" level=info msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" returns successfully" Apr 30 03:30:12.674755 systemd[1]: run-netns-cni\x2d6c1b292e\x2d9ddd\x2d46df\x2d4b5f\x2d2075841228a7.mount: Deactivated successfully. Apr 30 03:30:12.677921 containerd[1462]: time="2025-04-30T03:30:12.677850980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-979dw,Uid:9bbf289e-3bd8-4b3c-9652-ef642934c0ca,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:12.883044 systemd-networkd[1374]: cali3c10d1c7d9d: Link UP Apr 30 03:30:12.883445 systemd-networkd[1374]: cali3c10d1c7d9d: Gained carrier Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.674 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0 coredns-7db6d8ff4d- kube-system 964497a6-75e1-47e5-836b-3b870a46fee8 773 0 2025-04-30 03:29:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal coredns-7db6d8ff4d-gv4cp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c10d1c7d9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" 
WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.677 [INFO][4174] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.777 [INFO][4193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" HandleID="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.804 [INFO][4193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" HandleID="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040a6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-gv4cp", "timestamp":"2025-04-30 03:30:12.777017772 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.804 [INFO][4193] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.804 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.804 [INFO][4193] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.809 [INFO][4193] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.817 [INFO][4193] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.828 [INFO][4193] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.833 [INFO][4193] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.839 [INFO][4193] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.839 [INFO][4193] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.843 [INFO][4193] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598 Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.855 [INFO][4193] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4193] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.193/26] block=192.168.18.192/26 handle="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4193] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.193/26] handle="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:12.948403 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.193/26] IPv6=[] ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" HandleID="k8s-pod-network.edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.875 [INFO][4174] cni-plugin/k8s.go 386: Populated endpoint ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"964497a6-75e1-47e5-836b-3b870a46fee8", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-gv4cp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c10d1c7d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.875 [INFO][4174] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.193/32] ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.876 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c10d1c7d9d ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.882 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.884 [INFO][4174] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"964497a6-75e1-47e5-836b-3b870a46fee8", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598", Pod:"coredns-7db6d8ff4d-gv4cp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c10d1c7d9d", MAC:"ee:20:cc:26:bc:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.949966 containerd[1462]: 2025-04-30 03:30:12.930 [INFO][4174] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gv4cp" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:13.011969 containerd[1462]: time="2025-04-30T03:30:13.011014054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:13.012945 containerd[1462]: time="2025-04-30T03:30:13.012496843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:13.012945 containerd[1462]: time="2025-04-30T03:30:13.012576580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.013840 containerd[1462]: time="2025-04-30T03:30:13.013403134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.021832 systemd-networkd[1374]: cali6622c8055ec: Link UP Apr 30 03:30:13.024678 systemd-networkd[1374]: cali6622c8055ec: Gained carrier Apr 30 03:30:13.060398 systemd[1]: Started cri-containerd-edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598.scope - libcontainer container edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598. 
Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.766 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0 csi-node-driver- calico-system afe01694-9e56-4cfa-9fa0-0fe8aaed621f 774 0 2025-04-30 03:29:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal csi-node-driver-rbw9s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6622c8055ec [] []}} ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.767 [INFO][4188] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.844 [INFO][4220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" HandleID="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.862 
[INFO][4220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" HandleID="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e07b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"csi-node-driver-rbw9s", "timestamp":"2025-04-30 03:30:12.844142695 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.863 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.871 [INFO][4220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.881 [INFO][4220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.916 [INFO][4220] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.946 [INFO][4220] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.952 [INFO][4220] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.957 [INFO][4220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.957 [INFO][4220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.959 [INFO][4220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7 Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.969 [INFO][4220] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.18.192/26 handle="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4220] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.194/26] block=192.168.18.192/26 handle="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.194/26] handle="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:13.070714 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.194/26] IPv6=[] ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" HandleID="k8s-pod-network.e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:12.994 [INFO][4188] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe01694-9e56-4cfa-9fa0-0fe8aaed621f", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-rbw9s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6622c8055ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:12.996 [INFO][4188] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.194/32] ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:12.996 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6622c8055ec 
ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:13.027 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:13.033 [INFO][4188] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe01694-9e56-4cfa-9fa0-0fe8aaed621f", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7", Pod:"csi-node-driver-rbw9s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6622c8055ec", MAC:"ea:ca:fc:f7:ab:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.073413 containerd[1462]: 2025-04-30 03:30:13.066 [INFO][4188] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7" Namespace="calico-system" Pod="csi-node-driver-rbw9s" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:13.102277 systemd-networkd[1374]: caliec6a85e2859: Link UP Apr 30 03:30:13.104857 systemd-networkd[1374]: caliec6a85e2859: Gained carrier Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.802 [INFO][4202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0 calico-apiserver-74fd85f4d9- calico-apiserver 9bbf289e-3bd8-4b3c-9652-ef642934c0ca 775 0 2025-04-30 03:29:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fd85f4d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal 
calico-apiserver-74fd85f4d9-979dw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec6a85e2859 [] []}} ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.805 [INFO][4202] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.887 [INFO][4225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.936 [INFO][4225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051f70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-apiserver-74fd85f4d9-979dw", "timestamp":"2025-04-30 03:30:12.887125132 +0000 UTC"}, 
Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.937 [INFO][4225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.982 [INFO][4225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:12.990 [INFO][4225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.006 [INFO][4225] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.036 [INFO][4225] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.046 [INFO][4225] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.050 [INFO][4225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.050 [INFO][4225] ipam/ipam.go 1180: Attempting to assign 
1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.054 [INFO][4225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.070 [INFO][4225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.091 [INFO][4225] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.195/26] block=192.168.18.192/26 handle="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.091 [INFO][4225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.195/26] handle="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.091 [INFO][4225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:13.144991 containerd[1462]: 2025-04-30 03:30:13.092 [INFO][4225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.195/26] IPv6=[] ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.097 [INFO][4202] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bbf289e-3bd8-4b3c-9652-ef642934c0ca", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74fd85f4d9-979dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6a85e2859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.097 [INFO][4202] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.195/32] ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.097 [INFO][4202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec6a85e2859 ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.108 [INFO][4202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.109 [INFO][4202] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bbf289e-3bd8-4b3c-9652-ef642934c0ca", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a", Pod:"calico-apiserver-74fd85f4d9-979dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6a85e2859", MAC:"1e:64:5c:62:0b:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.149833 containerd[1462]: 2025-04-30 03:30:13.139 [INFO][4202] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-979dw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:13.176555 containerd[1462]: time="2025-04-30T03:30:13.176196003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:13.176555 containerd[1462]: time="2025-04-30T03:30:13.176274749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:13.176555 containerd[1462]: time="2025-04-30T03:30:13.176301293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.176555 containerd[1462]: time="2025-04-30T03:30:13.176421160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.194038 containerd[1462]: time="2025-04-30T03:30:13.193950626Z" level=info msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\"" Apr 30 03:30:13.262543 containerd[1462]: time="2025-04-30T03:30:13.262032778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gv4cp,Uid:964497a6-75e1-47e5-836b-3b870a46fee8,Namespace:kube-system,Attempt:1,} returns sandbox id \"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598\"" Apr 30 03:30:13.273633 containerd[1462]: time="2025-04-30T03:30:13.273195520Z" level=info msg="CreateContainer within sandbox \"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:13.274880 containerd[1462]: time="2025-04-30T03:30:13.272593347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:13.274880 containerd[1462]: time="2025-04-30T03:30:13.272706814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:13.274880 containerd[1462]: time="2025-04-30T03:30:13.273578624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.274880 containerd[1462]: time="2025-04-30T03:30:13.274287121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.277034 systemd[1]: Started cri-containerd-e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7.scope - libcontainer container e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7. 
Apr 30 03:30:13.328365 systemd[1]: Started cri-containerd-a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a.scope - libcontainer container a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a. Apr 30 03:30:13.331406 containerd[1462]: time="2025-04-30T03:30:13.331356846Z" level=info msg="CreateContainer within sandbox \"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bc2ef633963198521c1b77fe1bc0845846a798b8a1ee790cd0a4084f7179d29\"" Apr 30 03:30:13.337481 containerd[1462]: time="2025-04-30T03:30:13.335144469Z" level=info msg="StartContainer for \"5bc2ef633963198521c1b77fe1bc0845846a798b8a1ee790cd0a4084f7179d29\"" Apr 30 03:30:13.414829 containerd[1462]: time="2025-04-30T03:30:13.414329473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbw9s,Uid:afe01694-9e56-4cfa-9fa0-0fe8aaed621f,Namespace:calico-system,Attempt:1,} returns sandbox id \"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7\"" Apr 30 03:30:13.418188 systemd[1]: Started cri-containerd-5bc2ef633963198521c1b77fe1bc0845846a798b8a1ee790cd0a4084f7179d29.scope - libcontainer container 5bc2ef633963198521c1b77fe1bc0845846a798b8a1ee790cd0a4084f7179d29. 
Apr 30 03:30:13.423132 containerd[1462]: time="2025-04-30T03:30:13.423083727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:30:13.506624 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Apr 30 03:30:13.555762 containerd[1462]: time="2025-04-30T03:30:13.555266240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-979dw,Uid:9bbf289e-3bd8-4b3c-9652-ef642934c0ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\"" Apr 30 03:30:13.567300 containerd[1462]: time="2025-04-30T03:30:13.567246175Z" level=info msg="StartContainer for \"5bc2ef633963198521c1b77fe1bc0845846a798b8a1ee790cd0a4084f7179d29\" returns successfully" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.449 [INFO][4364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.451 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" iface="eth0" netns="/var/run/netns/cni-bdae5332-db73-cabd-1a1d-7d05dba22fe2" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.451 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" iface="eth0" netns="/var/run/netns/cni-bdae5332-db73-cabd-1a1d-7d05dba22fe2" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.453 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" iface="eth0" netns="/var/run/netns/cni-bdae5332-db73-cabd-1a1d-7d05dba22fe2" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.453 [INFO][4364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.453 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.560 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.561 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.562 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.579 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.579 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.581 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:13.586131 containerd[1462]: 2025-04-30 03:30:13.583 [INFO][4364] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:13.589800 containerd[1462]: time="2025-04-30T03:30:13.587297178Z" level=info msg="TearDown network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" successfully" Apr 30 03:30:13.589800 containerd[1462]: time="2025-04-30T03:30:13.587340177Z" level=info msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" returns successfully" Apr 30 03:30:13.593052 containerd[1462]: time="2025-04-30T03:30:13.590253812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-ndtsq,Uid:42a679bb-d883-4cd3-a4bf-74c95efe17a5,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:13.594829 systemd[1]: run-netns-cni\x2dbdae5332\x2ddb73\x2dcabd\x2d1a1d\x2d7d05dba22fe2.mount: Deactivated successfully. 
Apr 30 03:30:13.782084 systemd-networkd[1374]: cali5c5613a24a6: Link UP Apr 30 03:30:13.783989 systemd-networkd[1374]: cali5c5613a24a6: Gained carrier Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.686 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0 calico-apiserver-74dfd89d4c- calico-apiserver 42a679bb-d883-4cd3-a4bf-74c95efe17a5 790 0 2025-04-30 03:29:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74dfd89d4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal calico-apiserver-74dfd89d4c-ndtsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c5613a24a6 [] []}} ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.686 [INFO][4457] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.726 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" 
HandleID="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.737 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" HandleID="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-apiserver-74dfd89d4c-ndtsq", "timestamp":"2025-04-30 03:30:13.726381865 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.737 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.737 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.737 [INFO][4472] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.739 [INFO][4472] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.745 [INFO][4472] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.751 [INFO][4472] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.753 [INFO][4472] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.756 [INFO][4472] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.757 [INFO][4472] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.759 [INFO][4472] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560 Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.765 [INFO][4472] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.18.192/26 handle="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.774 [INFO][4472] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.196/26] block=192.168.18.192/26 handle="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.774 [INFO][4472] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.196/26] handle="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.775 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:13.815819 containerd[1462]: 2025-04-30 03:30:13.775 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.196/26] IPv6=[] ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" HandleID="k8s-pod-network.d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.776 [INFO][4457] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"42a679bb-d883-4cd3-a4bf-74c95efe17a5", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74dfd89d4c-ndtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5613a24a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.777 [INFO][4457] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.196/32] ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.777 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali5c5613a24a6 ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.784 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.784 [INFO][4457] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"42a679bb-d883-4cd3-a4bf-74c95efe17a5", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560", Pod:"calico-apiserver-74dfd89d4c-ndtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5613a24a6", MAC:"fa:62:5d:fa:f8:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:13.817145 containerd[1462]: 2025-04-30 03:30:13.811 [INFO][4457] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-ndtsq" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:13.862076 containerd[1462]: time="2025-04-30T03:30:13.861266984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:13.862076 containerd[1462]: time="2025-04-30T03:30:13.861389306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:13.862076 containerd[1462]: time="2025-04-30T03:30:13.861415649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.862076 containerd[1462]: time="2025-04-30T03:30:13.861556506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:13.907205 systemd[1]: Started cri-containerd-d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560.scope - libcontainer container d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560. Apr 30 03:30:13.967384 containerd[1462]: time="2025-04-30T03:30:13.967297926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-ndtsq,Uid:42a679bb-d883-4cd3-a4bf-74c95efe17a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560\"" Apr 30 03:30:14.146212 systemd-networkd[1374]: cali3c10d1c7d9d: Gained IPv6LL Apr 30 03:30:14.191926 containerd[1462]: time="2025-04-30T03:30:14.191683287Z" level=info msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\"" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.254 [INFO][4544] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.254 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" iface="eth0" netns="/var/run/netns/cni-827eb79e-8f78-57ac-add0-0a3c527fe01a" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.256 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" iface="eth0" netns="/var/run/netns/cni-827eb79e-8f78-57ac-add0-0a3c527fe01a" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.258 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" iface="eth0" netns="/var/run/netns/cni-827eb79e-8f78-57ac-add0-0a3c527fe01a" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.258 [INFO][4544] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.258 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.296 [INFO][4551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.296 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.296 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.309 [WARNING][4551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.309 [INFO][4551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.311 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.316181 containerd[1462]: 2025-04-30 03:30:14.313 [INFO][4544] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:14.316181 containerd[1462]: time="2025-04-30T03:30:14.316049883Z" level=info msg="TearDown network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" successfully" Apr 30 03:30:14.316181 containerd[1462]: time="2025-04-30T03:30:14.316088931Z" level=info msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" returns successfully" Apr 30 03:30:14.318134 containerd[1462]: time="2025-04-30T03:30:14.316984055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-5qtrf,Uid:87f3a9e5-5fac-471b-a36c-1452742abca5,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:30:14.523041 systemd[1]: run-netns-cni\x2d827eb79e\x2d8f78\x2d57ac\x2dadd0\x2d0a3c527fe01a.mount: Deactivated successfully. 
Apr 30 03:30:14.600485 systemd-networkd[1374]: cali80772b383b0: Link UP Apr 30 03:30:14.601621 systemd-networkd[1374]: cali80772b383b0: Gained carrier Apr 30 03:30:14.619475 kubelet[2602]: I0430 03:30:14.619121 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gv4cp" podStartSLOduration=32.61904326 podStartE2EDuration="32.61904326s" podCreationTimestamp="2025-04-30 03:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:14.567070929 +0000 UTC m=+46.541712676" watchObservedRunningTime="2025-04-30 03:30:14.61904326 +0000 UTC m=+46.593685028" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.396 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0 calico-apiserver-74fd85f4d9- calico-apiserver 87f3a9e5-5fac-471b-a36c-1452742abca5 800 0 2025-04-30 03:29:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fd85f4d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal calico-apiserver-74fd85f4d9-5qtrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali80772b383b0 [] []}} ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.396 [INFO][4557] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.474 [INFO][4574] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.494 [INFO][4574] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-apiserver-74fd85f4d9-5qtrf", "timestamp":"2025-04-30 03:30:14.474400084 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.494 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.494 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.494 [INFO][4574] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.498 [INFO][4574] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.506 [INFO][4574] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.530 [INFO][4574] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.534 [INFO][4574] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.542 [INFO][4574] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.542 [INFO][4574] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.549 [INFO][4574] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37 Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.561 [INFO][4574] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.18.192/26 handle="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.586 [INFO][4574] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.197/26] block=192.168.18.192/26 handle="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.586 [INFO][4574] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.197/26] handle="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.586 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:14.637499 containerd[1462]: 2025-04-30 03:30:14.586 [INFO][4574] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.197/26] IPv6=[] ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.593 [INFO][4557] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87f3a9e5-5fac-471b-a36c-1452742abca5", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74fd85f4d9-5qtrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80772b383b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.595 [INFO][4557] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.197/32] ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.595 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali80772b383b0 ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.601 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.602 [INFO][4557] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87f3a9e5-5fac-471b-a36c-1452742abca5", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37", Pod:"calico-apiserver-74fd85f4d9-5qtrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80772b383b0", MAC:"4a:fb:03:6e:d9:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:14.640637 containerd[1462]: 2025-04-30 03:30:14.632 [INFO][4557] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Namespace="calico-apiserver" Pod="calico-apiserver-74fd85f4d9-5qtrf" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:14.733594 containerd[1462]: time="2025-04-30T03:30:14.732207604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:14.733594 containerd[1462]: time="2025-04-30T03:30:14.733215932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:14.733594 containerd[1462]: time="2025-04-30T03:30:14.733275512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:14.733594 containerd[1462]: time="2025-04-30T03:30:14.733403255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:14.794396 systemd[1]: Started cri-containerd-bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37.scope - libcontainer container bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37. Apr 30 03:30:14.807936 containerd[1462]: time="2025-04-30T03:30:14.807735319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.810051 containerd[1462]: time="2025-04-30T03:30:14.809974071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:30:14.812928 containerd[1462]: time="2025-04-30T03:30:14.811777587Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.816100 containerd[1462]: time="2025-04-30T03:30:14.816050055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:14.818259 containerd[1462]: time="2025-04-30T03:30:14.818208339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.395066405s" Apr 30 03:30:14.818395 containerd[1462]: time="2025-04-30T03:30:14.818265068Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:30:14.821458 containerd[1462]: time="2025-04-30T03:30:14.821419395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:14.825840 containerd[1462]: time="2025-04-30T03:30:14.825780871Z" level=info msg="CreateContainer within sandbox \"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:30:14.850532 containerd[1462]: time="2025-04-30T03:30:14.850372649Z" level=info msg="CreateContainer within sandbox \"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3930f5752e0f3ae62d1abc7e12fc20bfc2593e81bd9632e81bf917a713acc1b9\"" Apr 30 03:30:14.852256 containerd[1462]: time="2025-04-30T03:30:14.852206011Z" level=info msg="StartContainer for \"3930f5752e0f3ae62d1abc7e12fc20bfc2593e81bd9632e81bf917a713acc1b9\"" Apr 30 03:30:14.907654 containerd[1462]: time="2025-04-30T03:30:14.907603677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fd85f4d9-5qtrf,Uid:87f3a9e5-5fac-471b-a36c-1452742abca5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\"" Apr 30 03:30:14.928210 systemd[1]: Started cri-containerd-3930f5752e0f3ae62d1abc7e12fc20bfc2593e81bd9632e81bf917a713acc1b9.scope - libcontainer container 3930f5752e0f3ae62d1abc7e12fc20bfc2593e81bd9632e81bf917a713acc1b9. 
Apr 30 03:30:14.972185 containerd[1462]: time="2025-04-30T03:30:14.972007880Z" level=info msg="StartContainer for \"3930f5752e0f3ae62d1abc7e12fc20bfc2593e81bd9632e81bf917a713acc1b9\" returns successfully" Apr 30 03:30:14.978189 systemd-networkd[1374]: caliec6a85e2859: Gained IPv6LL Apr 30 03:30:15.042414 systemd-networkd[1374]: cali6622c8055ec: Gained IPv6LL Apr 30 03:30:15.190630 containerd[1462]: time="2025-04-30T03:30:15.190413403Z" level=info msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\"" Apr 30 03:30:15.192967 containerd[1462]: time="2025-04-30T03:30:15.190417488Z" level=info msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\"" Apr 30 03:30:15.236128 systemd-networkd[1374]: cali5c5613a24a6: Gained IPv6LL Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.280 [INFO][4702] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.280 [INFO][4702] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" iface="eth0" netns="/var/run/netns/cni-44c1527c-2b3b-c422-a427-934c592a5163" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.283 [INFO][4702] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" iface="eth0" netns="/var/run/netns/cni-44c1527c-2b3b-c422-a427-934c592a5163" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.285 [INFO][4702] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" iface="eth0" netns="/var/run/netns/cni-44c1527c-2b3b-c422-a427-934c592a5163" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.285 [INFO][4702] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.285 [INFO][4702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.331 [INFO][4718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.331 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.331 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.340 [WARNING][4718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.341 [INFO][4718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.344 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.347677 containerd[1462]: 2025-04-30 03:30:15.346 [INFO][4702] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:15.350542 containerd[1462]: time="2025-04-30T03:30:15.347951556Z" level=info msg="TearDown network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" successfully" Apr 30 03:30:15.350542 containerd[1462]: time="2025-04-30T03:30:15.347989737Z" level=info msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" returns successfully" Apr 30 03:30:15.350542 containerd[1462]: time="2025-04-30T03:30:15.349067520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4zx2,Uid:b8dfc8e0-f268-4281-b376-50f8468daeb0,Namespace:kube-system,Attempt:1,}" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.280 [INFO][4703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.282 [INFO][4703] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" iface="eth0" netns="/var/run/netns/cni-4ffb3d02-d763-3ec8-cdf0-8bb812afd49e" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.282 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" iface="eth0" netns="/var/run/netns/cni-4ffb3d02-d763-3ec8-cdf0-8bb812afd49e" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.284 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" iface="eth0" netns="/var/run/netns/cni-4ffb3d02-d763-3ec8-cdf0-8bb812afd49e" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.284 [INFO][4703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.284 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.331 [INFO][4716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.334 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.344 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.356 [WARNING][4716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.356 [INFO][4716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.359 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.363971 containerd[1462]: 2025-04-30 03:30:15.361 [INFO][4703] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:15.366073 containerd[1462]: time="2025-04-30T03:30:15.364154356Z" level=info msg="TearDown network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" successfully" Apr 30 03:30:15.366073 containerd[1462]: time="2025-04-30T03:30:15.364190529Z" level=info msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" returns successfully" Apr 30 03:30:15.366422 containerd[1462]: time="2025-04-30T03:30:15.366092931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64db747896-slftr,Uid:4490f834-9862-47b3-94e9-1a6cf67f5b80,Namespace:calico-system,Attempt:1,}" Apr 30 03:30:15.528768 systemd[1]: run-netns-cni\x2d4ffb3d02\x2dd763\x2d3ec8\x2dcdf0\x2d8bb812afd49e.mount: Deactivated successfully. Apr 30 03:30:15.528942 systemd[1]: run-netns-cni\x2d44c1527c\x2d2b3b\x2dc422\x2da427\x2d934c592a5163.mount: Deactivated successfully. 
Apr 30 03:30:15.611338 systemd-networkd[1374]: cali2a18bcd96c3: Link UP Apr 30 03:30:15.614496 systemd-networkd[1374]: cali2a18bcd96c3: Gained carrier Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.436 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0 coredns-7db6d8ff4d- kube-system b8dfc8e0-f268-4281-b376-50f8468daeb0 820 0 2025-04-30 03:29:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal coredns-7db6d8ff4d-j4zx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a18bcd96c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.437 [INFO][4729] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.537 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" HandleID="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" 
Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.562 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" HandleID="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011beb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-j4zx2", "timestamp":"2025-04-30 03:30:15.537392073 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.563 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.563 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.563 [INFO][4753] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.566 [INFO][4753] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.574 [INFO][4753] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.580 [INFO][4753] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.582 [INFO][4753] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.585 [INFO][4753] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.585 [INFO][4753] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.587 [INFO][4753] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544 Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.591 [INFO][4753] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.18.192/26 handle="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.599 [INFO][4753] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.198/26] block=192.168.18.192/26 handle="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.600 [INFO][4753] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.198/26] handle="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.600 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:15.647876 containerd[1462]: 2025-04-30 03:30:15.600 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.198/26] IPv6=[] ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" HandleID="k8s-pod-network.11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.602 [INFO][4729] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8dfc8e0-f268-4281-b376-50f8468daeb0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-j4zx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a18bcd96c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.603 [INFO][4729] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.198/32] ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" 
WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.603 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a18bcd96c3 ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.615 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.616 [INFO][4729] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8dfc8e0-f268-4281-b376-50f8468daeb0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544", Pod:"coredns-7db6d8ff4d-j4zx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a18bcd96c3", MAC:"56:73:6c:74:31:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.650697 containerd[1462]: 2025-04-30 03:30:15.643 [INFO][4729] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4zx2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:15.708777 systemd-networkd[1374]: calif965e1a30d6: Link UP Apr 30 03:30:15.711214 systemd-networkd[1374]: calif965e1a30d6: Gained carrier Apr 30 03:30:15.729015 containerd[1462]: time="2025-04-30T03:30:15.728806488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:15.729568 containerd[1462]: time="2025-04-30T03:30:15.729348775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:15.729568 containerd[1462]: time="2025-04-30T03:30:15.729393422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:15.730262 containerd[1462]: time="2025-04-30T03:30:15.730160283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.478 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0 calico-kube-controllers-64db747896- calico-system 4490f834-9862-47b3-94e9-1a6cf67f5b80 821 0 2025-04-30 03:29:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64db747896 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal calico-kube-controllers-64db747896-slftr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif965e1a30d6 [] []}} ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.478 [INFO][4740] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.564 [INFO][4760] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.576 [INFO][4760] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bf50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-kube-controllers-64db747896-slftr", "timestamp":"2025-04-30 03:30:15.564388018 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.576 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.600 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.600 [INFO][4760] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.603 [INFO][4760] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.616 [INFO][4760] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.630 [INFO][4760] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.637 [INFO][4760] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.642 [INFO][4760] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.642 [INFO][4760] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.653 [INFO][4760] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c Apr 30 
03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.670 [INFO][4760] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.691 [INFO][4760] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.199/26] block=192.168.18.192/26 handle="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.692 [INFO][4760] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.199/26] handle="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.692 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:15.756566 containerd[1462]: 2025-04-30 03:30:15.692 [INFO][4760] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.199/26] IPv6=[] ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.757783 containerd[1462]: 2025-04-30 03:30:15.698 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0", GenerateName:"calico-kube-controllers-64db747896-", Namespace:"calico-system", SelfLink:"", UID:"4490f834-9862-47b3-94e9-1a6cf67f5b80", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64db747896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-64db747896-slftr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif965e1a30d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.757783 containerd[1462]: 2025-04-30 03:30:15.698 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.199/32] ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.757783 containerd[1462]: 2025-04-30 03:30:15.698 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif965e1a30d6 ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.757783 containerd[1462]: 2025-04-30 03:30:15.709 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.757783 containerd[1462]: 2025-04-30 03:30:15.712 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0", GenerateName:"calico-kube-controllers-64db747896-", Namespace:"calico-system", SelfLink:"", UID:"4490f834-9862-47b3-94e9-1a6cf67f5b80", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64db747896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c", Pod:"calico-kube-controllers-64db747896-slftr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif965e1a30d6", MAC:"56:44:72:a8:a7:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:15.757783 containerd[1462]: 
2025-04-30 03:30:15.738 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Namespace="calico-system" Pod="calico-kube-controllers-64db747896-slftr" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:15.813206 systemd[1]: Started cri-containerd-11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544.scope - libcontainer container 11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544. Apr 30 03:30:15.865967 containerd[1462]: time="2025-04-30T03:30:15.864039730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:15.865967 containerd[1462]: time="2025-04-30T03:30:15.865167773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:15.865967 containerd[1462]: time="2025-04-30T03:30:15.865211277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:15.865967 containerd[1462]: time="2025-04-30T03:30:15.865383337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:15.942157 systemd[1]: Started cri-containerd-9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c.scope - libcontainer container 9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c. 
Apr 30 03:30:15.971190 containerd[1462]: time="2025-04-30T03:30:15.971133863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4zx2,Uid:b8dfc8e0-f268-4281-b376-50f8468daeb0,Namespace:kube-system,Attempt:1,} returns sandbox id \"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544\"" Apr 30 03:30:15.980793 containerd[1462]: time="2025-04-30T03:30:15.980713191Z" level=info msg="CreateContainer within sandbox \"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:30:16.013596 containerd[1462]: time="2025-04-30T03:30:16.013256154Z" level=info msg="CreateContainer within sandbox \"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5f9fd831b08afd5d61969d55208d0e59bbe6b6c732a2ea741fe3ae8d43470ae\"" Apr 30 03:30:16.018681 containerd[1462]: time="2025-04-30T03:30:16.017023955Z" level=info msg="StartContainer for \"f5f9fd831b08afd5d61969d55208d0e59bbe6b6c732a2ea741fe3ae8d43470ae\"" Apr 30 03:30:16.049837 containerd[1462]: time="2025-04-30T03:30:16.049778418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64db747896-slftr,Uid:4490f834-9862-47b3-94e9-1a6cf67f5b80,Namespace:calico-system,Attempt:1,} returns sandbox id \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\"" Apr 30 03:30:16.082204 systemd[1]: Started cri-containerd-f5f9fd831b08afd5d61969d55208d0e59bbe6b6c732a2ea741fe3ae8d43470ae.scope - libcontainer container f5f9fd831b08afd5d61969d55208d0e59bbe6b6c732a2ea741fe3ae8d43470ae. 
Apr 30 03:30:16.144936 containerd[1462]: time="2025-04-30T03:30:16.144857496Z" level=info msg="StartContainer for \"f5f9fd831b08afd5d61969d55208d0e59bbe6b6c732a2ea741fe3ae8d43470ae\" returns successfully" Apr 30 03:30:16.514607 systemd-networkd[1374]: cali80772b383b0: Gained IPv6LL Apr 30 03:30:16.615413 kubelet[2602]: I0430 03:30:16.615323 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j4zx2" podStartSLOduration=34.615295106 podStartE2EDuration="34.615295106s" podCreationTimestamp="2025-04-30 03:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:16.589283992 +0000 UTC m=+48.563925740" watchObservedRunningTime="2025-04-30 03:30:16.615295106 +0000 UTC m=+48.589936854" Apr 30 03:30:16.835004 systemd-networkd[1374]: cali2a18bcd96c3: Gained IPv6LL Apr 30 03:30:16.838266 systemd-networkd[1374]: calif965e1a30d6: Gained IPv6LL Apr 30 03:30:17.518051 containerd[1462]: time="2025-04-30T03:30:17.517988300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.522132 containerd[1462]: time="2025-04-30T03:30:17.521766206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:30:17.526462 containerd[1462]: time="2025-04-30T03:30:17.526399993Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.531849 containerd[1462]: time="2025-04-30T03:30:17.531781106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.534587 
containerd[1462]: time="2025-04-30T03:30:17.534368215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.712893489s" Apr 30 03:30:17.534587 containerd[1462]: time="2025-04-30T03:30:17.534444242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:17.537351 containerd[1462]: time="2025-04-30T03:30:17.537302313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:17.543377 containerd[1462]: time="2025-04-30T03:30:17.543319648Z" level=info msg="CreateContainer within sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:17.568789 containerd[1462]: time="2025-04-30T03:30:17.568718425Z" level=info msg="CreateContainer within sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\"" Apr 30 03:30:17.569880 containerd[1462]: time="2025-04-30T03:30:17.569837764Z" level=info msg="StartContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\"" Apr 30 03:30:17.635203 systemd[1]: Started cri-containerd-b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679.scope - libcontainer container b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679. 
Apr 30 03:30:17.701303 containerd[1462]: time="2025-04-30T03:30:17.701241591Z" level=info msg="StartContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" returns successfully" Apr 30 03:30:17.774015 containerd[1462]: time="2025-04-30T03:30:17.771938519Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:17.775857 containerd[1462]: time="2025-04-30T03:30:17.775794479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:30:17.779510 containerd[1462]: time="2025-04-30T03:30:17.779398033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 242.044839ms" Apr 30 03:30:17.779722 containerd[1462]: time="2025-04-30T03:30:17.779698755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:17.782979 containerd[1462]: time="2025-04-30T03:30:17.782049882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:17.785909 containerd[1462]: time="2025-04-30T03:30:17.785809417Z" level=info msg="CreateContainer within sandbox \"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:17.808315 containerd[1462]: time="2025-04-30T03:30:17.808259216Z" level=info msg="CreateContainer within sandbox \"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"472ca11c1d5f653e3d7d62ea815f97d9e0ab1a22f3b091640c708eebcb778c08\"" Apr 30 03:30:17.811624 containerd[1462]: time="2025-04-30T03:30:17.811552143Z" level=info msg="StartContainer for \"472ca11c1d5f653e3d7d62ea815f97d9e0ab1a22f3b091640c708eebcb778c08\"" Apr 30 03:30:17.864048 systemd[1]: Started cri-containerd-472ca11c1d5f653e3d7d62ea815f97d9e0ab1a22f3b091640c708eebcb778c08.scope - libcontainer container 472ca11c1d5f653e3d7d62ea815f97d9e0ab1a22f3b091640c708eebcb778c08. Apr 30 03:30:17.958837 containerd[1462]: time="2025-04-30T03:30:17.958768320Z" level=info msg="StartContainer for \"472ca11c1d5f653e3d7d62ea815f97d9e0ab1a22f3b091640c708eebcb778c08\" returns successfully" Apr 30 03:30:18.009792 containerd[1462]: time="2025-04-30T03:30:18.009726471Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:18.013364 containerd[1462]: time="2025-04-30T03:30:18.013162740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:30:18.017588 containerd[1462]: time="2025-04-30T03:30:18.017117527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 235.017638ms" Apr 30 03:30:18.017588 containerd[1462]: time="2025-04-30T03:30:18.017197548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:18.022648 containerd[1462]: time="2025-04-30T03:30:18.021328745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:30:18.027106 
containerd[1462]: time="2025-04-30T03:30:18.025692583Z" level=info msg="CreateContainer within sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:18.045359 containerd[1462]: time="2025-04-30T03:30:18.045297647Z" level=info msg="CreateContainer within sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\"" Apr 30 03:30:18.046972 containerd[1462]: time="2025-04-30T03:30:18.046424574Z" level=info msg="StartContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\"" Apr 30 03:30:18.103105 systemd[1]: Started cri-containerd-266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102.scope - libcontainer container 266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102. Apr 30 03:30:18.255083 containerd[1462]: time="2025-04-30T03:30:18.254959905Z" level=info msg="StartContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" returns successfully" Apr 30 03:30:18.652805 kubelet[2602]: I0430 03:30:18.652720 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74dfd89d4c-ndtsq" podStartSLOduration=25.841878653 podStartE2EDuration="29.652689126s" podCreationTimestamp="2025-04-30 03:29:49 +0000 UTC" firstStartedPulling="2025-04-30 03:30:13.970051939 +0000 UTC m=+45.944693671" lastFinishedPulling="2025-04-30 03:30:17.780862416 +0000 UTC m=+49.755504144" observedRunningTime="2025-04-30 03:30:18.621137841 +0000 UTC m=+50.595779588" watchObservedRunningTime="2025-04-30 03:30:18.652689126 +0000 UTC m=+50.627330876" Apr 30 03:30:18.684512 kubelet[2602]: I0430 03:30:18.683110 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-74fd85f4d9-979dw" podStartSLOduration=26.707073894 podStartE2EDuration="30.68307735s" podCreationTimestamp="2025-04-30 03:29:48 +0000 UTC" firstStartedPulling="2025-04-30 03:30:13.560066872 +0000 UTC m=+45.534708615" lastFinishedPulling="2025-04-30 03:30:17.536070327 +0000 UTC m=+49.510712071" observedRunningTime="2025-04-30 03:30:18.654467203 +0000 UTC m=+50.629108927" watchObservedRunningTime="2025-04-30 03:30:18.68307735 +0000 UTC m=+50.657719098" Apr 30 03:30:19.553943 ntpd[1431]: Listen normally on 7 vxlan.calico 192.168.18.192:123 Apr 30 03:30:19.554086 ntpd[1431]: Listen normally on 8 vxlan.calico [fe80::6487:e5ff:fe6f:42b0%4]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 7 vxlan.calico 192.168.18.192:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 8 vxlan.calico [fe80::6487:e5ff:fe6f:42b0%4]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 9 cali3c10d1c7d9d [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 10 cali6622c8055ec [fe80::ecee:eeff:feee:eeee%8]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 11 caliec6a85e2859 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 12 cali5c5613a24a6 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 13 cali80772b383b0 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 14 cali2a18bcd96c3 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:30:19.554581 ntpd[1431]: 30 Apr 03:30:19 ntpd[1431]: Listen normally on 15 calif965e1a30d6 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 30 03:30:19.554179 ntpd[1431]: Listen normally on 9 cali3c10d1c7d9d [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 
03:30:19.554242 ntpd[1431]: Listen normally on 10 cali6622c8055ec [fe80::ecee:eeff:feee:eeee%8]:123 Apr 30 03:30:19.554317 ntpd[1431]: Listen normally on 11 caliec6a85e2859 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 30 03:30:19.554372 ntpd[1431]: Listen normally on 12 cali5c5613a24a6 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:30:19.554425 ntpd[1431]: Listen normally on 13 cali80772b383b0 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:30:19.554477 ntpd[1431]: Listen normally on 14 cali2a18bcd96c3 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:30:19.554529 ntpd[1431]: Listen normally on 15 calif965e1a30d6 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 30 03:30:19.613744 kubelet[2602]: I0430 03:30:19.613703 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:19.615876 kubelet[2602]: I0430 03:30:19.615831 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:19.627603 containerd[1462]: time="2025-04-30T03:30:19.627532156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:19.629352 containerd[1462]: time="2025-04-30T03:30:19.629234580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:30:19.631619 containerd[1462]: time="2025-04-30T03:30:19.631571394Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:19.635129 containerd[1462]: time="2025-04-30T03:30:19.635076901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:19.637431 containerd[1462]: time="2025-04-30T03:30:19.637362285Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.615977692s" Apr 30 03:30:19.637606 containerd[1462]: time="2025-04-30T03:30:19.637422075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:30:19.639369 containerd[1462]: time="2025-04-30T03:30:19.639322658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:30:19.641928 containerd[1462]: time="2025-04-30T03:30:19.641848144Z" level=info msg="CreateContainer within sandbox \"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:30:19.678403 containerd[1462]: time="2025-04-30T03:30:19.676079768Z" level=info msg="CreateContainer within sandbox \"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94\"" Apr 30 03:30:19.678403 containerd[1462]: time="2025-04-30T03:30:19.677347129Z" level=info msg="StartContainer for \"56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94\"" Apr 30 03:30:19.789945 systemd[1]: run-containerd-runc-k8s.io-56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94-runc.ow545J.mount: Deactivated successfully. 
Apr 30 03:30:19.802173 systemd[1]: Started cri-containerd-56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94.scope - libcontainer container 56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94. Apr 30 03:30:19.946540 containerd[1462]: time="2025-04-30T03:30:19.945412782Z" level=info msg="StartContainer for \"56d2cdf2d2e170163fd6daea560a9afbcec89d186eec5100db07edd2a68cec94\" returns successfully" Apr 30 03:30:20.350415 kubelet[2602]: I0430 03:30:20.350355 2602 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:30:20.350415 kubelet[2602]: I0430 03:30:20.350406 2602 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:30:20.639055 kubelet[2602]: I0430 03:30:20.636968 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:20.664696 kubelet[2602]: I0430 03:30:20.664614 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74fd85f4d9-5qtrf" podStartSLOduration=29.553778579 podStartE2EDuration="32.664585354s" podCreationTimestamp="2025-04-30 03:29:48 +0000 UTC" firstStartedPulling="2025-04-30 03:30:14.90989614 +0000 UTC m=+46.884537878" lastFinishedPulling="2025-04-30 03:30:18.020702905 +0000 UTC m=+49.995344653" observedRunningTime="2025-04-30 03:30:18.687425737 +0000 UTC m=+50.662067487" watchObservedRunningTime="2025-04-30 03:30:20.664585354 +0000 UTC m=+52.639227103" Apr 30 03:30:20.667021 kubelet[2602]: I0430 03:30:20.666951 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rbw9s" podStartSLOduration=25.449477829 podStartE2EDuration="31.666929169s" podCreationTimestamp="2025-04-30 03:29:49 +0000 UTC" firstStartedPulling="2025-04-30 03:30:13.421290261 +0000 UTC 
m=+45.395931987" lastFinishedPulling="2025-04-30 03:30:19.638741593 +0000 UTC m=+51.613383327" observedRunningTime="2025-04-30 03:30:20.66262589 +0000 UTC m=+52.637267648" watchObservedRunningTime="2025-04-30 03:30:20.666929169 +0000 UTC m=+52.641570913" Apr 30 03:30:22.367224 containerd[1462]: time="2025-04-30T03:30:22.367172512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:22.368904 containerd[1462]: time="2025-04-30T03:30:22.368822014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:30:22.370439 containerd[1462]: time="2025-04-30T03:30:22.370238850Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:22.374608 containerd[1462]: time="2025-04-30T03:30:22.374554951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:22.376723 containerd[1462]: time="2025-04-30T03:30:22.375558355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.736184067s" Apr 30 03:30:22.376723 containerd[1462]: time="2025-04-30T03:30:22.376155714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:30:22.399570 
containerd[1462]: time="2025-04-30T03:30:22.399320782Z" level=info msg="CreateContainer within sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:30:22.421131 containerd[1462]: time="2025-04-30T03:30:22.421078231Z" level=info msg="CreateContainer within sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\"" Apr 30 03:30:22.424252 containerd[1462]: time="2025-04-30T03:30:22.424193315Z" level=info msg="StartContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\"" Apr 30 03:30:22.481157 systemd[1]: Started cri-containerd-8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49.scope - libcontainer container 8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49. Apr 30 03:30:22.556242 containerd[1462]: time="2025-04-30T03:30:22.555328606Z" level=info msg="StartContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" returns successfully" Apr 30 03:30:22.679553 kubelet[2602]: I0430 03:30:22.678744 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64db747896-slftr" podStartSLOduration=27.35518071 podStartE2EDuration="33.678714184s" podCreationTimestamp="2025-04-30 03:29:49 +0000 UTC" firstStartedPulling="2025-04-30 03:30:16.054222757 +0000 UTC m=+48.028864493" lastFinishedPulling="2025-04-30 03:30:22.377755998 +0000 UTC m=+54.352397967" observedRunningTime="2025-04-30 03:30:22.676057196 +0000 UTC m=+54.650698941" watchObservedRunningTime="2025-04-30 03:30:22.678714184 +0000 UTC m=+54.653355932" Apr 30 03:30:28.207820 containerd[1462]: time="2025-04-30T03:30:28.207769860Z" level=info msg="StopPodSandbox for 
\"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\"" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.258 [WARNING][5184] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe01694-9e56-4cfa-9fa0-0fe8aaed621f", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7", Pod:"csi-node-driver-rbw9s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6622c8055ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.308932 containerd[1462]: 
2025-04-30 03:30:28.259 [INFO][5184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.259 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" iface="eth0" netns="" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.259 [INFO][5184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.259 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.292 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.292 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.292 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.302 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.302 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.304 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.308932 containerd[1462]: 2025-04-30 03:30:28.306 [INFO][5184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.308932 containerd[1462]: time="2025-04-30T03:30:28.308149966Z" level=info msg="TearDown network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" successfully" Apr 30 03:30:28.308932 containerd[1462]: time="2025-04-30T03:30:28.308187436Z" level=info msg="StopPodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" returns successfully" Apr 30 03:30:28.309837 containerd[1462]: time="2025-04-30T03:30:28.309042039Z" level=info msg="RemovePodSandbox for \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\"" Apr 30 03:30:28.309837 containerd[1462]: time="2025-04-30T03:30:28.309085502Z" level=info msg="Forcibly stopping sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\"" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.376 [WARNING][5212] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe01694-9e56-4cfa-9fa0-0fe8aaed621f", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"e20a3e16f7d5c015025977433f73a5c99547b9f0b6d9d9e672838930c73cd9d7", Pod:"csi-node-driver-rbw9s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6622c8055ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.376 [INFO][5212] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.376 
[INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" iface="eth0" netns="" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.376 [INFO][5212] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.376 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.406 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.406 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.406 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.414 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.414 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" HandleID="k8s-pod-network.324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-csi--node--driver--rbw9s-eth0" Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.416 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.419526 containerd[1462]: 2025-04-30 03:30:28.418 [INFO][5212] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158" Apr 30 03:30:28.421072 containerd[1462]: time="2025-04-30T03:30:28.419579191Z" level=info msg="TearDown network for sandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" successfully" Apr 30 03:30:28.426791 containerd[1462]: time="2025-04-30T03:30:28.426686689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:28.427000 containerd[1462]: time="2025-04-30T03:30:28.426802916Z" level=info msg="RemovePodSandbox \"324d8ab2f378cd34bf05128a969d2c4396d292e910eea6c95171b8588c0d2158\" returns successfully" Apr 30 03:30:28.427798 containerd[1462]: time="2025-04-30T03:30:28.427702212Z" level=info msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\"" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.478 [WARNING][5237] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bbf289e-3bd8-4b3c-9652-ef642934c0ca", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a", Pod:"calico-apiserver-74fd85f4d9-979dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6a85e2859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.479 [INFO][5237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.479 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" iface="eth0" netns="" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.479 [INFO][5237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.479 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.507 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.507 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.507 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.517 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.517 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.519 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.523738 containerd[1462]: 2025-04-30 03:30:28.521 [INFO][5237] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.524712 containerd[1462]: time="2025-04-30T03:30:28.523829550Z" level=info msg="TearDown network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" successfully" Apr 30 03:30:28.524712 containerd[1462]: time="2025-04-30T03:30:28.523869734Z" level=info msg="StopPodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" returns successfully" Apr 30 03:30:28.525220 containerd[1462]: time="2025-04-30T03:30:28.524787689Z" level=info msg="RemovePodSandbox for \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\"" Apr 30 03:30:28.525220 containerd[1462]: time="2025-04-30T03:30:28.524840011Z" level=info msg="Forcibly stopping sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\"" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.606 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bbf289e-3bd8-4b3c-9652-ef642934c0ca", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a", Pod:"calico-apiserver-74fd85f4d9-979dw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6a85e2859", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.606 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.606 [INFO][5263] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" iface="eth0" netns="" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.606 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.606 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.640 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.640 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.640 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.647 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.647 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" HandleID="k8s-pod-network.077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.649 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.652625 containerd[1462]: 2025-04-30 03:30:28.651 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e" Apr 30 03:30:28.653624 containerd[1462]: time="2025-04-30T03:30:28.652733118Z" level=info msg="TearDown network for sandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" successfully" Apr 30 03:30:28.659019 containerd[1462]: time="2025-04-30T03:30:28.658924778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:28.659346 containerd[1462]: time="2025-04-30T03:30:28.659031512Z" level=info msg="RemovePodSandbox \"077841ab1726b6fec4f1dce2e021f1b1776eacd523f732517ba113055de7503e\" returns successfully" Apr 30 03:30:28.659823 containerd[1462]: time="2025-04-30T03:30:28.659757001Z" level=info msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\"" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.712 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87f3a9e5-5fac-471b-a36c-1452742abca5", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37", Pod:"calico-apiserver-74fd85f4d9-5qtrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80772b383b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.712 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.712 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" iface="eth0" netns="" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.712 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.712 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.742 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.742 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.742 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.749 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.749 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.751 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.754922 containerd[1462]: 2025-04-30 03:30:28.753 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.756323 containerd[1462]: time="2025-04-30T03:30:28.754950043Z" level=info msg="TearDown network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" successfully" Apr 30 03:30:28.756323 containerd[1462]: time="2025-04-30T03:30:28.754989928Z" level=info msg="StopPodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" returns successfully" Apr 30 03:30:28.756323 containerd[1462]: time="2025-04-30T03:30:28.755664028Z" level=info msg="RemovePodSandbox for \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\"" Apr 30 03:30:28.756323 containerd[1462]: time="2025-04-30T03:30:28.755972760Z" level=info msg="Forcibly stopping sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\"" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.804 [WARNING][5314] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0", GenerateName:"calico-apiserver-74fd85f4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"87f3a9e5-5fac-471b-a36c-1452742abca5", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fd85f4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37", Pod:"calico-apiserver-74fd85f4d9-5qtrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80772b383b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.805 [INFO][5314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.805 [INFO][5314] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" iface="eth0" netns="" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.805 [INFO][5314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.805 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.834 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.834 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.834 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.841 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.841 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" HandleID="k8s-pod-network.bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.843 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.846282 containerd[1462]: 2025-04-30 03:30:28.844 [INFO][5314] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491" Apr 30 03:30:28.846282 containerd[1462]: time="2025-04-30T03:30:28.846152070Z" level=info msg="TearDown network for sandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" successfully" Apr 30 03:30:28.851241 containerd[1462]: time="2025-04-30T03:30:28.851182378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:28.851241 containerd[1462]: time="2025-04-30T03:30:28.851287803Z" level=info msg="RemovePodSandbox \"bdb20797a34ee3fe5a5908059362bdb9db4f7031fd4a387fb6758c1abc5a0491\" returns successfully" Apr 30 03:30:28.852438 containerd[1462]: time="2025-04-30T03:30:28.851916010Z" level=info msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\"" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.902 [WARNING][5339] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0", GenerateName:"calico-kube-controllers-64db747896-", Namespace:"calico-system", SelfLink:"", UID:"4490f834-9862-47b3-94e9-1a6cf67f5b80", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64db747896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c", Pod:"calico-kube-controllers-64db747896-slftr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif965e1a30d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.903 [INFO][5339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.903 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" iface="eth0" netns="" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.903 [INFO][5339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.903 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.934 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.934 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.934 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.942 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.942 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.944 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:28.947122 containerd[1462]: 2025-04-30 03:30:28.945 [INFO][5339] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:28.948050 containerd[1462]: time="2025-04-30T03:30:28.947189738Z" level=info msg="TearDown network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" successfully" Apr 30 03:30:28.948050 containerd[1462]: time="2025-04-30T03:30:28.947224808Z" level=info msg="StopPodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" returns successfully" Apr 30 03:30:28.948669 containerd[1462]: time="2025-04-30T03:30:28.948618853Z" level=info msg="RemovePodSandbox for \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\"" Apr 30 03:30:28.948669 containerd[1462]: time="2025-04-30T03:30:28.948666266Z" level=info msg="Forcibly stopping sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\"" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.016 [WARNING][5364] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0", GenerateName:"calico-kube-controllers-64db747896-", Namespace:"calico-system", SelfLink:"", UID:"4490f834-9862-47b3-94e9-1a6cf67f5b80", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64db747896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c", Pod:"calico-kube-controllers-64db747896-slftr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif965e1a30d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.016 [INFO][5364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.016 
[INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" iface="eth0" netns="" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.016 [INFO][5364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.016 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.071 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.072 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.072 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.079 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.079 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" HandleID="k8s-pod-network.020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.081 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.084732 containerd[1462]: 2025-04-30 03:30:29.083 [INFO][5364] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3" Apr 30 03:30:29.085928 containerd[1462]: time="2025-04-30T03:30:29.084750655Z" level=info msg="TearDown network for sandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" successfully" Apr 30 03:30:29.090377 containerd[1462]: time="2025-04-30T03:30:29.090314352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:29.090554 containerd[1462]: time="2025-04-30T03:30:29.090516324Z" level=info msg="RemovePodSandbox \"020e519415680369aa5634998791c0dfd413029578a95489ac52b55c600f0ce3\" returns successfully" Apr 30 03:30:29.091354 containerd[1462]: time="2025-04-30T03:30:29.091311286Z" level=info msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\"" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.142 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"964497a6-75e1-47e5-836b-3b870a46fee8", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598", Pod:"coredns-7db6d8ff4d-gv4cp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali3c10d1c7d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.142 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.142 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" iface="eth0" netns="" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.142 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.142 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.172 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.173 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.173 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.180 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.180 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.181 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.184493 containerd[1462]: 2025-04-30 03:30:29.183 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.184493 containerd[1462]: time="2025-04-30T03:30:29.184443936Z" level=info msg="TearDown network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" successfully" Apr 30 03:30:29.187337 containerd[1462]: time="2025-04-30T03:30:29.184499699Z" level=info msg="StopPodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" returns successfully" Apr 30 03:30:29.187337 containerd[1462]: time="2025-04-30T03:30:29.185223757Z" level=info msg="RemovePodSandbox for \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\"" Apr 30 03:30:29.187337 containerd[1462]: time="2025-04-30T03:30:29.185266090Z" level=info msg="Forcibly stopping sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\"" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.233 [WARNING][5435] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"964497a6-75e1-47e5-836b-3b870a46fee8", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"edd0bf8b096d44ff6f9a46c1e195d8d5cf7783588135977757484115d90b3598", Pod:"coredns-7db6d8ff4d-gv4cp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c10d1c7d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 
03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.233 [INFO][5435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.234 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" iface="eth0" netns="" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.234 [INFO][5435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.234 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.262 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.262 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.262 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.269 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.269 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" HandleID="k8s-pod-network.a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gv4cp-eth0" Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.271 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.274104 containerd[1462]: 2025-04-30 03:30:29.272 [INFO][5435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09" Apr 30 03:30:29.275457 containerd[1462]: time="2025-04-30T03:30:29.274162842Z" level=info msg="TearDown network for sandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" successfully" Apr 30 03:30:29.279643 containerd[1462]: time="2025-04-30T03:30:29.279563344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:29.279855 containerd[1462]: time="2025-04-30T03:30:29.279689484Z" level=info msg="RemovePodSandbox \"a3293310c2655271584d935084260e44695fe3b8a521eb96897a583e44ec7e09\" returns successfully" Apr 30 03:30:29.280556 containerd[1462]: time="2025-04-30T03:30:29.280441772Z" level=info msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\"" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.328 [WARNING][5460] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"42a679bb-d883-4cd3-a4bf-74c95efe17a5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560", Pod:"calico-apiserver-74dfd89d4c-ndtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5613a24a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.328 [INFO][5460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.328 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" iface="eth0" netns="" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.328 [INFO][5460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.328 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.359 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.359 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.359 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.366 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.366 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.368 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.371566 containerd[1462]: 2025-04-30 03:30:29.369 [INFO][5460] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.372378 containerd[1462]: time="2025-04-30T03:30:29.371629024Z" level=info msg="TearDown network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" successfully" Apr 30 03:30:29.372378 containerd[1462]: time="2025-04-30T03:30:29.371678030Z" level=info msg="StopPodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" returns successfully" Apr 30 03:30:29.372657 containerd[1462]: time="2025-04-30T03:30:29.372520578Z" level=info msg="RemovePodSandbox for \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\"" Apr 30 03:30:29.372657 containerd[1462]: time="2025-04-30T03:30:29.372563845Z" level=info msg="Forcibly stopping sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\"" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.429 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"42a679bb-d883-4cd3-a4bf-74c95efe17a5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"d930ce3f8f89b791d882858c586ff08545ecb34eba0372d56823a06c14a06560", Pod:"calico-apiserver-74dfd89d4c-ndtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5613a24a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.430 [INFO][5486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.430 [INFO][5486] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" iface="eth0" netns="" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.430 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.430 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.459 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.459 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.459 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.468 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.468 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" HandleID="k8s-pod-network.89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--ndtsq-eth0" Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.471 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.474796 containerd[1462]: 2025-04-30 03:30:29.473 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960" Apr 30 03:30:29.475649 containerd[1462]: time="2025-04-30T03:30:29.474806364Z" level=info msg="TearDown network for sandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" successfully" Apr 30 03:30:29.481296 containerd[1462]: time="2025-04-30T03:30:29.481195891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:29.481773 containerd[1462]: time="2025-04-30T03:30:29.481332886Z" level=info msg="RemovePodSandbox \"89e1dba9ac63cd0bcd12d8111e108f722601547ea5af06ef3218642fdcf51960\" returns successfully" Apr 30 03:30:29.482262 containerd[1462]: time="2025-04-30T03:30:29.482180807Z" level=info msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\"" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.532 [WARNING][5511] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8dfc8e0-f268-4281-b376-50f8468daeb0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544", Pod:"coredns-7db6d8ff4d-j4zx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali2a18bcd96c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.533 [INFO][5511] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.533 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" iface="eth0" netns="" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.533 [INFO][5511] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.533 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.559 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.560 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.560 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.567 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.567 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.570 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.573392 containerd[1462]: 2025-04-30 03:30:29.571 [INFO][5511] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.574879 containerd[1462]: time="2025-04-30T03:30:29.573391912Z" level=info msg="TearDown network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" successfully" Apr 30 03:30:29.574879 containerd[1462]: time="2025-04-30T03:30:29.573457879Z" level=info msg="StopPodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" returns successfully" Apr 30 03:30:29.574879 containerd[1462]: time="2025-04-30T03:30:29.574287619Z" level=info msg="RemovePodSandbox for \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\"" Apr 30 03:30:29.574879 containerd[1462]: time="2025-04-30T03:30:29.574334116Z" level=info msg="Forcibly stopping sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\"" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.624 [WARNING][5536] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b8dfc8e0-f268-4281-b376-50f8468daeb0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"11df4fec096eda308677fe5b98a748bd2b204fa4fcccafa4fddb71278d6a2544", Pod:"coredns-7db6d8ff4d-j4zx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a18bcd96c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 
03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.624 [INFO][5536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.624 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" iface="eth0" netns="" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.624 [INFO][5536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.624 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.651 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.651 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.651 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.661 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.661 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" HandleID="k8s-pod-network.967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j4zx2-eth0" Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.663 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:29.667208 containerd[1462]: 2025-04-30 03:30:29.665 [INFO][5536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e" Apr 30 03:30:29.668342 containerd[1462]: time="2025-04-30T03:30:29.667227704Z" level=info msg="TearDown network for sandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" successfully" Apr 30 03:30:29.673499 containerd[1462]: time="2025-04-30T03:30:29.673377716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:29.673499 containerd[1462]: time="2025-04-30T03:30:29.673481026Z" level=info msg="RemovePodSandbox \"967b84852b086b18aeec6b393becae48afe8af6e026509587137571435fc557e\" returns successfully" Apr 30 03:30:30.806431 systemd[1]: Started sshd@7-10.128.0.99:22-139.178.68.195:51262.service - OpenSSH per-connection server daemon (139.178.68.195:51262). 
Apr 30 03:30:31.102957 sshd[5554]: Accepted publickey for core from 139.178.68.195 port 51262 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:30:31.105281 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:31.112752 systemd-logind[1443]: New session 8 of user core. Apr 30 03:30:31.117243 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:30:31.478991 sshd[5554]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:31.485570 systemd[1]: sshd@7-10.128.0.99:22-139.178.68.195:51262.service: Deactivated successfully. Apr 30 03:30:31.488645 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:30:31.490259 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:30:31.491812 systemd-logind[1443]: Removed session 8. Apr 30 03:30:32.719798 kubelet[2602]: I0430 03:30:32.719607 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:32.796101 kubelet[2602]: I0430 03:30:32.795865 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:32.807235 containerd[1462]: time="2025-04-30T03:30:32.803115557Z" level=info msg="StopContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" with timeout 30 (s)" Apr 30 03:30:32.807235 containerd[1462]: time="2025-04-30T03:30:32.807051484Z" level=info msg="Stop container \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" with signal terminated" Apr 30 03:30:32.878593 kubelet[2602]: I0430 03:30:32.878532 2602 topology_manager.go:215] "Topology Admit Handler" podUID="4822f20f-c705-4d23-bdfa-5a621c47c9d4" podNamespace="calico-apiserver" podName="calico-apiserver-74dfd89d4c-llmn2" Apr 30 03:30:32.895604 systemd[1]: Created slice kubepods-besteffort-pod4822f20f_c705_4d23_bdfa_5a621c47c9d4.slice - libcontainer container kubepods-besteffort-pod4822f20f_c705_4d23_bdfa_5a621c47c9d4.slice. 
Apr 30 03:30:32.900926 kubelet[2602]: I0430 03:30:32.899314 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcz6v\" (UniqueName: \"kubernetes.io/projected/4822f20f-c705-4d23-bdfa-5a621c47c9d4-kube-api-access-zcz6v\") pod \"calico-apiserver-74dfd89d4c-llmn2\" (UID: \"4822f20f-c705-4d23-bdfa-5a621c47c9d4\") " pod="calico-apiserver/calico-apiserver-74dfd89d4c-llmn2" Apr 30 03:30:32.900926 kubelet[2602]: I0430 03:30:32.899394 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4822f20f-c705-4d23-bdfa-5a621c47c9d4-calico-apiserver-certs\") pod \"calico-apiserver-74dfd89d4c-llmn2\" (UID: \"4822f20f-c705-4d23-bdfa-5a621c47c9d4\") " pod="calico-apiserver/calico-apiserver-74dfd89d4c-llmn2" Apr 30 03:30:32.910401 systemd[1]: cri-containerd-b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679.scope: Deactivated successfully. Apr 30 03:30:32.984175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679-rootfs.mount: Deactivated successfully. 
Apr 30 03:30:32.990201 containerd[1462]: time="2025-04-30T03:30:32.990054502Z" level=info msg="shim disconnected" id=b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679 namespace=k8s.io Apr 30 03:30:32.990493 containerd[1462]: time="2025-04-30T03:30:32.990260440Z" level=warning msg="cleaning up after shim disconnected" id=b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679 namespace=k8s.io Apr 30 03:30:32.990493 containerd[1462]: time="2025-04-30T03:30:32.990274404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:33.037008 containerd[1462]: time="2025-04-30T03:30:33.032998716Z" level=info msg="StopContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" returns successfully" Apr 30 03:30:33.037181 containerd[1462]: time="2025-04-30T03:30:33.036995862Z" level=info msg="StopPodSandbox for \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\"" Apr 30 03:30:33.037181 containerd[1462]: time="2025-04-30T03:30:33.037068460Z" level=info msg="Container to stop \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:33.053335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a-shm.mount: Deactivated successfully. Apr 30 03:30:33.059025 systemd[1]: cri-containerd-a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a.scope: Deactivated successfully. 
Apr 30 03:30:33.096007 containerd[1462]: time="2025-04-30T03:30:33.094880320Z" level=info msg="shim disconnected" id=a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a namespace=k8s.io Apr 30 03:30:33.096007 containerd[1462]: time="2025-04-30T03:30:33.095067213Z" level=warning msg="cleaning up after shim disconnected" id=a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a namespace=k8s.io Apr 30 03:30:33.096007 containerd[1462]: time="2025-04-30T03:30:33.095082776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:33.101966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a-rootfs.mount: Deactivated successfully. Apr 30 03:30:33.197211 systemd-networkd[1374]: caliec6a85e2859: Link DOWN Apr 30 03:30:33.199423 systemd-networkd[1374]: caliec6a85e2859: Lost carrier Apr 30 03:30:33.207767 containerd[1462]: time="2025-04-30T03:30:33.207710640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-llmn2,Uid:4822f20f-c705-4d23-bdfa-5a621c47c9d4,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.191 [INFO][5652] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.192 [INFO][5652] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" iface="eth0" netns="/var/run/netns/cni-fc2f4489-6697-82bf-4f25-dbe68f2925bf" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.194 [INFO][5652] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" iface="eth0" netns="/var/run/netns/cni-fc2f4489-6697-82bf-4f25-dbe68f2925bf" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.205 [INFO][5652] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" after=12.843833ms iface="eth0" netns="/var/run/netns/cni-fc2f4489-6697-82bf-4f25-dbe68f2925bf" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.205 [INFO][5652] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.205 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.280 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.281 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.281 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.349 [INFO][5662] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.349 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.351 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:33.356287 containerd[1462]: 2025-04-30 03:30:33.354 [INFO][5652] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:30:33.357587 containerd[1462]: time="2025-04-30T03:30:33.356776406Z" level=info msg="TearDown network for sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" successfully" Apr 30 03:30:33.357587 containerd[1462]: time="2025-04-30T03:30:33.356846154Z" level=info msg="StopPodSandbox for \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" returns successfully" Apr 30 03:30:33.407104 kubelet[2602]: I0430 03:30:33.404492 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-calico-apiserver-certs\") pod \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\" (UID: \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\") " Apr 30 03:30:33.408260 kubelet[2602]: I0430 03:30:33.407972 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzbdz\" (UniqueName: \"kubernetes.io/projected/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-kube-api-access-jzbdz\") pod \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\" (UID: \"9bbf289e-3bd8-4b3c-9652-ef642934c0ca\") " Apr 30 03:30:33.413635 kubelet[2602]: I0430 03:30:33.413470 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9bbf289e-3bd8-4b3c-9652-ef642934c0ca" (UID: "9bbf289e-3bd8-4b3c-9652-ef642934c0ca"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:30:33.419008 kubelet[2602]: I0430 03:30:33.418808 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-kube-api-access-jzbdz" (OuterVolumeSpecName: "kube-api-access-jzbdz") pod "9bbf289e-3bd8-4b3c-9652-ef642934c0ca" (UID: "9bbf289e-3bd8-4b3c-9652-ef642934c0ca"). InnerVolumeSpecName "kube-api-access-jzbdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:30:33.459657 systemd-networkd[1374]: cali5a58d29146c: Link UP Apr 30 03:30:33.460443 systemd-networkd[1374]: cali5a58d29146c: Gained carrier Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.317 [INFO][5670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0 calico-apiserver-74dfd89d4c- calico-apiserver 4822f20f-c705-4d23-bdfa-5a621c47c9d4 982 0 2025-04-30 03:30:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74dfd89d4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal calico-apiserver-74dfd89d4c-llmn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a58d29146c [] []}} ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.318 [INFO][5670] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.383 [INFO][5685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" HandleID="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.415 [INFO][5685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" HandleID="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-apiserver-74dfd89d4c-llmn2", "timestamp":"2025-04-30 03:30:33.383492332 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.416 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.416 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.417 [INFO][5685] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal' Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.419 [INFO][5685] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.425 [INFO][5685] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.430 [INFO][5685] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.432 [INFO][5685] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.435 [INFO][5685] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.435 [INFO][5685] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.437 [INFO][5685] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.442 [INFO][5685] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.18.192/26 handle="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.452 [INFO][5685] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.200/26] block=192.168.18.192/26 handle="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.452 [INFO][5685] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.200/26] handle="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal" Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.452 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:33.478938 containerd[1462]: 2025-04-30 03:30:33.452 [INFO][5685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.200/26] IPv6=[] ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" HandleID="k8s-pod-network.945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.454 [INFO][5670] cni-plugin/k8s.go 386: Populated endpoint ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4822f20f-c705-4d23-bdfa-5a621c47c9d4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-74dfd89d4c-llmn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a58d29146c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.454 [INFO][5670] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.200/32] ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.454 [INFO][5670] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali5a58d29146c ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.457 [INFO][5670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.458 [INFO][5670] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0", GenerateName:"calico-apiserver-74dfd89d4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4822f20f-c705-4d23-bdfa-5a621c47c9d4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74dfd89d4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea", Pod:"calico-apiserver-74dfd89d4c-llmn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a58d29146c", MAC:"fe:02:06:27:d8:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:33.484147 containerd[1462]: 2025-04-30 03:30:33.473 [INFO][5670] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea" Namespace="calico-apiserver" Pod="calico-apiserver-74dfd89d4c-llmn2" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74dfd89d4c--llmn2-eth0" Apr 30 03:30:33.509101 kubelet[2602]: I0430 03:30:33.509042 2602 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-calico-apiserver-certs\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:30:33.509101 kubelet[2602]: I0430 03:30:33.509099 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jzbdz\" (UniqueName: \"kubernetes.io/projected/9bbf289e-3bd8-4b3c-9652-ef642934c0ca-kube-api-access-jzbdz\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:30:33.525533 containerd[1462]: time="2025-04-30T03:30:33.525389978Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:33.525533 containerd[1462]: time="2025-04-30T03:30:33.525467713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:33.525533 containerd[1462]: time="2025-04-30T03:30:33.525486842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:33.526026 containerd[1462]: time="2025-04-30T03:30:33.525610385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:33.550149 systemd[1]: Started cri-containerd-945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea.scope - libcontainer container 945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea. Apr 30 03:30:33.614350 containerd[1462]: time="2025-04-30T03:30:33.614152581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74dfd89d4c-llmn2,Uid:4822f20f-c705-4d23-bdfa-5a621c47c9d4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea\"" Apr 30 03:30:33.620671 containerd[1462]: time="2025-04-30T03:30:33.620619333Z" level=info msg="CreateContainer within sandbox \"945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:33.635220 containerd[1462]: time="2025-04-30T03:30:33.635148653Z" level=info msg="CreateContainer within sandbox \"945927a6937947176784cd0498f9a2573a7ee342b27162cc88b913372de8eeea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5fa393a1d81474725ec518992e9db87180a1c3bb6c73bfeae696f3170b7aba3d\"" Apr 30 03:30:33.636207 containerd[1462]: time="2025-04-30T03:30:33.636100434Z" 
level=info msg="StartContainer for \"5fa393a1d81474725ec518992e9db87180a1c3bb6c73bfeae696f3170b7aba3d\"" Apr 30 03:30:33.675172 systemd[1]: Started cri-containerd-5fa393a1d81474725ec518992e9db87180a1c3bb6c73bfeae696f3170b7aba3d.scope - libcontainer container 5fa393a1d81474725ec518992e9db87180a1c3bb6c73bfeae696f3170b7aba3d. Apr 30 03:30:33.700086 kubelet[2602]: I0430 03:30:33.700027 2602 scope.go:117] "RemoveContainer" containerID="b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679" Apr 30 03:30:33.720523 containerd[1462]: time="2025-04-30T03:30:33.720464007Z" level=info msg="RemoveContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\"" Apr 30 03:30:33.722631 systemd[1]: Removed slice kubepods-besteffort-pod9bbf289e_3bd8_4b3c_9652_ef642934c0ca.slice - libcontainer container kubepods-besteffort-pod9bbf289e_3bd8_4b3c_9652_ef642934c0ca.slice. Apr 30 03:30:33.722816 systemd[1]: kubepods-besteffort-pod9bbf289e_3bd8_4b3c_9652_ef642934c0ca.slice: Consumed 1.043s CPU time. 
Apr 30 03:30:33.749964 containerd[1462]: time="2025-04-30T03:30:33.748148978Z" level=info msg="RemoveContainer for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" returns successfully" Apr 30 03:30:33.754566 kubelet[2602]: I0430 03:30:33.754511 2602 scope.go:117] "RemoveContainer" containerID="b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679" Apr 30 03:30:33.757161 containerd[1462]: time="2025-04-30T03:30:33.757089609Z" level=error msg="ContainerStatus for \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\": not found" Apr 30 03:30:33.757819 kubelet[2602]: E0430 03:30:33.757708 2602 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\": not found" containerID="b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679" Apr 30 03:30:33.758075 kubelet[2602]: I0430 03:30:33.758005 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679"} err="failed to get container status \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\": rpc error: code = NotFound desc = an error occurred when try to find container \"b17e0c75bfc5a26b4ab59fc76bb242bf7baf744c09de3d82618b7202f8a6b679\": not found" Apr 30 03:30:33.784518 containerd[1462]: time="2025-04-30T03:30:33.784308304Z" level=info msg="StartContainer for \"5fa393a1d81474725ec518992e9db87180a1c3bb6c73bfeae696f3170b7aba3d\" returns successfully" Apr 30 03:30:33.996061 systemd[1]: run-netns-cni\x2dfc2f4489\x2d6697\x2d82bf\x2d4f25\x2ddbe68f2925bf.mount: Deactivated successfully. 
Apr 30 03:30:33.996219 systemd[1]: var-lib-kubelet-pods-9bbf289e\x2d3bd8\x2d4b3c\x2d9652\x2def642934c0ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzbdz.mount: Deactivated successfully. Apr 30 03:30:33.996345 systemd[1]: var-lib-kubelet-pods-9bbf289e\x2d3bd8\x2d4b3c\x2d9652\x2def642934c0ca-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Apr 30 03:30:34.194661 kubelet[2602]: I0430 03:30:34.194612 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca" path="/var/lib/kubelet/pods/9bbf289e-3bd8-4b3c-9652-ef642934c0ca/volumes" Apr 30 03:30:34.770212 kubelet[2602]: I0430 03:30:34.770051 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74dfd89d4c-llmn2" podStartSLOduration=2.7700271560000003 podStartE2EDuration="2.770027156s" podCreationTimestamp="2025-04-30 03:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:34.76964034 +0000 UTC m=+66.744282088" watchObservedRunningTime="2025-04-30 03:30:34.770027156 +0000 UTC m=+66.744668905" Apr 30 03:30:35.075023 systemd-networkd[1374]: cali5a58d29146c: Gained IPv6LL Apr 30 03:30:35.121569 containerd[1462]: time="2025-04-30T03:30:35.121511356Z" level=info msg="StopContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" with timeout 30 (s)" Apr 30 03:30:35.122796 containerd[1462]: time="2025-04-30T03:30:35.122747185Z" level=info msg="Stop container \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" with signal terminated" Apr 30 03:30:35.166295 systemd[1]: cri-containerd-266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102.scope: Deactivated successfully. 
Apr 30 03:30:35.166827 systemd[1]: cri-containerd-266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102.scope: Consumed 1.441s CPU time. Apr 30 03:30:35.228306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102-rootfs.mount: Deactivated successfully. Apr 30 03:30:35.241367 containerd[1462]: time="2025-04-30T03:30:35.241276068Z" level=info msg="shim disconnected" id=266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102 namespace=k8s.io Apr 30 03:30:35.241367 containerd[1462]: time="2025-04-30T03:30:35.241367478Z" level=warning msg="cleaning up after shim disconnected" id=266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102 namespace=k8s.io Apr 30 03:30:35.242044 containerd[1462]: time="2025-04-30T03:30:35.241381582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:35.272053 containerd[1462]: time="2025-04-30T03:30:35.271698495Z" level=info msg="StopContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" returns successfully" Apr 30 03:30:35.272473 containerd[1462]: time="2025-04-30T03:30:35.272428454Z" level=info msg="StopPodSandbox for \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\"" Apr 30 03:30:35.272841 containerd[1462]: time="2025-04-30T03:30:35.272688761Z" level=info msg="Container to stop \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:35.281793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37-shm.mount: Deactivated successfully. Apr 30 03:30:35.289799 systemd[1]: cri-containerd-bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37.scope: Deactivated successfully. 
Apr 30 03:30:35.332549 containerd[1462]: time="2025-04-30T03:30:35.332092339Z" level=info msg="shim disconnected" id=bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37 namespace=k8s.io
Apr 30 03:30:35.332549 containerd[1462]: time="2025-04-30T03:30:35.332165568Z" level=warning msg="cleaning up after shim disconnected" id=bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37 namespace=k8s.io
Apr 30 03:30:35.332549 containerd[1462]: time="2025-04-30T03:30:35.332180294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:35.343750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37-rootfs.mount: Deactivated successfully.
Apr 30 03:30:35.359506 containerd[1462]: time="2025-04-30T03:30:35.359441610Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:30:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:30:35.433049 systemd-networkd[1374]: cali80772b383b0: Link DOWN
Apr 30 03:30:35.433440 systemd-networkd[1374]: cali80772b383b0: Lost carrier
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.428 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.430 [INFO][5864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" iface="eth0" netns="/var/run/netns/cni-3515592b-b69f-29c5-3306-bfd3cbd4d38f"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.430 [INFO][5864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" iface="eth0" netns="/var/run/netns/cni-3515592b-b69f-29c5-3306-bfd3cbd4d38f"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.439 [INFO][5864] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" after=8.87957ms iface="eth0" netns="/var/run/netns/cni-3515592b-b69f-29c5-3306-bfd3cbd4d38f"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.439 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.439 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.481 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.481 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.481 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.538 [INFO][5873] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.538 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0"
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.540 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:35.543584 containerd[1462]: 2025-04-30 03:30:35.542 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37"
Apr 30 03:30:35.546693 containerd[1462]: time="2025-04-30T03:30:35.546052905Z" level=info msg="TearDown network for sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" successfully"
Apr 30 03:30:35.546693 containerd[1462]: time="2025-04-30T03:30:35.546104647Z" level=info msg="StopPodSandbox for \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" returns successfully"
Apr 30 03:30:35.552572 systemd[1]: run-netns-cni\x2d3515592b\x2db69f\x2d29c5\x2d3306\x2dbfd3cbd4d38f.mount: Deactivated successfully.
Apr 30 03:30:35.625822 kubelet[2602]: I0430 03:30:35.625100 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87f3a9e5-5fac-471b-a36c-1452742abca5-calico-apiserver-certs\") pod \"87f3a9e5-5fac-471b-a36c-1452742abca5\" (UID: \"87f3a9e5-5fac-471b-a36c-1452742abca5\") "
Apr 30 03:30:35.625822 kubelet[2602]: I0430 03:30:35.625220 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpj5\" (UniqueName: \"kubernetes.io/projected/87f3a9e5-5fac-471b-a36c-1452742abca5-kube-api-access-7hpj5\") pod \"87f3a9e5-5fac-471b-a36c-1452742abca5\" (UID: \"87f3a9e5-5fac-471b-a36c-1452742abca5\") "
Apr 30 03:30:35.632902 kubelet[2602]: I0430 03:30:35.632823 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f3a9e5-5fac-471b-a36c-1452742abca5-kube-api-access-7hpj5" (OuterVolumeSpecName: "kube-api-access-7hpj5") pod "87f3a9e5-5fac-471b-a36c-1452742abca5" (UID: "87f3a9e5-5fac-471b-a36c-1452742abca5"). InnerVolumeSpecName "kube-api-access-7hpj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:30:35.637150 kubelet[2602]: I0430 03:30:35.637092 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f3a9e5-5fac-471b-a36c-1452742abca5-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "87f3a9e5-5fac-471b-a36c-1452742abca5" (UID: "87f3a9e5-5fac-471b-a36c-1452742abca5"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 03:30:35.637810 systemd[1]: var-lib-kubelet-pods-87f3a9e5\x2d5fac\x2d471b\x2da36c\x2d1452742abca5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hpj5.mount: Deactivated successfully.
Apr 30 03:30:35.727247 kubelet[2602]: I0430 03:30:35.726554 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7hpj5\" (UniqueName: \"kubernetes.io/projected/87f3a9e5-5fac-471b-a36c-1452742abca5-kube-api-access-7hpj5\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\""
Apr 30 03:30:35.727247 kubelet[2602]: I0430 03:30:35.726612 2602 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87f3a9e5-5fac-471b-a36c-1452742abca5-calico-apiserver-certs\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\""
Apr 30 03:30:35.764920 kubelet[2602]: I0430 03:30:35.764377 2602 scope.go:117] "RemoveContainer" containerID="266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102"
Apr 30 03:30:35.769169 containerd[1462]: time="2025-04-30T03:30:35.767256066Z" level=info msg="RemoveContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\""
Apr 30 03:30:35.777181 containerd[1462]: time="2025-04-30T03:30:35.776955251Z" level=info msg="RemoveContainer for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" returns successfully"
Apr 30 03:30:35.777563 kubelet[2602]: I0430 03:30:35.777517 2602 scope.go:117] "RemoveContainer" containerID="266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102"
Apr 30 03:30:35.778378 containerd[1462]: time="2025-04-30T03:30:35.778260649Z" level=error msg="ContainerStatus for \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\": not found"
Apr 30 03:30:35.780018 kubelet[2602]: E0430 03:30:35.779945 2602 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\": not found" containerID="266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102"
Apr 30 03:30:35.780018 kubelet[2602]: I0430 03:30:35.779995 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102"} err="failed to get container status \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\": rpc error: code = NotFound desc = an error occurred when try to find container \"266d3e25c47cf518743d592a3a029dcc71ecc400e836ad45f8fed3ae95570102\": not found"
Apr 30 03:30:35.780122 systemd[1]: Removed slice kubepods-besteffort-pod87f3a9e5_5fac_471b_a36c_1452742abca5.slice - libcontainer container kubepods-besteffort-pod87f3a9e5_5fac_471b_a36c_1452742abca5.slice.
Apr 30 03:30:35.780310 systemd[1]: kubepods-besteffort-pod87f3a9e5_5fac_471b_a36c_1452742abca5.slice: Consumed 1.486s CPU time.
Apr 30 03:30:36.199099 kubelet[2602]: I0430 03:30:36.198874 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5" path="/var/lib/kubelet/pods/87f3a9e5-5fac-471b-a36c-1452742abca5/volumes"
Apr 30 03:30:36.222329 systemd[1]: var-lib-kubelet-pods-87f3a9e5\x2d5fac\x2d471b\x2da36c\x2d1452742abca5-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Apr 30 03:30:36.537471 systemd[1]: Started sshd@8-10.128.0.99:22-139.178.68.195:48642.service - OpenSSH per-connection server daemon (139.178.68.195:48642).
Apr 30 03:30:36.830665 sshd[5910]: Accepted publickey for core from 139.178.68.195 port 48642 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0
Apr 30 03:30:36.833120 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:36.839984 systemd-logind[1443]: New session 9 of user core.
Apr 30 03:30:36.843136 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:30:37.125878 sshd[5910]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:37.131796 systemd[1]: sshd@8-10.128.0.99:22-139.178.68.195:48642.service: Deactivated successfully.
Apr 30 03:30:37.135470 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:30:37.138001 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:30:37.139935 systemd-logind[1443]: Removed session 9.
Apr 30 03:30:37.553773 ntpd[1431]: Listen normally on 16 cali5a58d29146c [fe80::ecee:eeff:feee:eeee%14]:123
Apr 30 03:30:37.553877 ntpd[1431]: Deleting interface #11 caliec6a85e2859, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Apr 30 03:30:37.554540 ntpd[1431]: 30 Apr 03:30:37 ntpd[1431]: Listen normally on 16 cali5a58d29146c [fe80::ecee:eeff:feee:eeee%14]:123
Apr 30 03:30:37.554540 ntpd[1431]: 30 Apr 03:30:37 ntpd[1431]: Deleting interface #11 caliec6a85e2859, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Apr 30 03:30:37.554540 ntpd[1431]: 30 Apr 03:30:37 ntpd[1431]: Deleting interface #13 cali80772b383b0, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Apr 30 03:30:37.553996 ntpd[1431]: Deleting interface #13 cali80772b383b0, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs
Apr 30 03:30:42.184385 systemd[1]: Started sshd@9-10.128.0.99:22-139.178.68.195:48658.service - OpenSSH per-connection server daemon (139.178.68.195:48658).
Apr 30 03:30:42.484010 sshd[5926]: Accepted publickey for core from 139.178.68.195 port 48658 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0
Apr 30 03:30:42.486801 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:42.493213 systemd-logind[1443]: New session 10 of user core.
Apr 30 03:30:42.504281 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:30:42.780669 sshd[5926]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:42.789415 systemd[1]: sshd@9-10.128.0.99:22-139.178.68.195:48658.service: Deactivated successfully.
Apr 30 03:30:42.792600 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:30:42.793803 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:30:42.795662 systemd-logind[1443]: Removed session 10.
Apr 30 03:30:42.837367 systemd[1]: Started sshd@10-10.128.0.99:22-139.178.68.195:48668.service - OpenSSH per-connection server daemon (139.178.68.195:48668).
Apr 30 03:30:43.136378 sshd[5940]: Accepted publickey for core from 139.178.68.195 port 48668 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0
Apr 30 03:30:43.138815 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:43.145745 systemd-logind[1443]: New session 11 of user core.
Apr 30 03:30:43.149178 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:30:43.492870 sshd[5940]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:43.498872 systemd[1]: sshd@10-10.128.0.99:22-139.178.68.195:48668.service: Deactivated successfully.
Apr 30 03:30:43.502096 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:30:43.503366 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:30:43.505756 systemd-logind[1443]: Removed session 11.
Apr 30 03:30:43.555376 systemd[1]: Started sshd@11-10.128.0.99:22-139.178.68.195:48676.service - OpenSSH per-connection server daemon (139.178.68.195:48676).
Apr 30 03:30:43.839108 sshd[5951]: Accepted publickey for core from 139.178.68.195 port 48676 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0
Apr 30 03:30:43.841276 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:43.848171 systemd-logind[1443]: New session 12 of user core.
Apr 30 03:30:43.853228 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:30:44.134527 sshd[5951]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:44.141310 systemd[1]: sshd@11-10.128.0.99:22-139.178.68.195:48676.service: Deactivated successfully.
Apr 30 03:30:44.144342 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:30:44.145745 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:30:44.147741 systemd-logind[1443]: Removed session 12.
Apr 30 03:30:49.191317 systemd[1]: Started sshd@12-10.128.0.99:22-139.178.68.195:60526.service - OpenSSH per-connection server daemon (139.178.68.195:60526).
Apr 30 03:30:49.493547 sshd[5968]: Accepted publickey for core from 139.178.68.195 port 60526 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0
Apr 30 03:30:49.492590 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:49.503132 systemd-logind[1443]: New session 13 of user core.
Apr 30 03:30:49.509156 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:30:49.874921 sshd[5968]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:49.885572 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:30:49.889588 systemd[1]: sshd@12-10.128.0.99:22-139.178.68.195:60526.service: Deactivated successfully.
Apr 30 03:30:49.896009 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:30:49.901267 systemd-logind[1443]: Removed session 13.
Apr 30 03:30:50.085557 containerd[1462]: time="2025-04-30T03:30:50.085344237Z" level=info msg="StopContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" with timeout 300 (s)"
Apr 30 03:30:50.087117 containerd[1462]: time="2025-04-30T03:30:50.086570661Z" level=info msg="Stop container \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" with signal terminated"
Apr 30 03:30:50.334761 containerd[1462]: time="2025-04-30T03:30:50.334021185Z" level=info msg="StopContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" with timeout 30 (s)"
Apr 30 03:30:50.336278 containerd[1462]: time="2025-04-30T03:30:50.336036294Z" level=info msg="Stop container \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" with signal terminated"
Apr 30 03:30:50.414812 systemd[1]: cri-containerd-8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49.scope: Deactivated successfully.
Apr 30 03:30:50.472053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49-rootfs.mount: Deactivated successfully.
Apr 30 03:30:50.494697 containerd[1462]: time="2025-04-30T03:30:50.494255989Z" level=info msg="shim disconnected" id=8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49 namespace=k8s.io
Apr 30 03:30:50.494697 containerd[1462]: time="2025-04-30T03:30:50.494655144Z" level=warning msg="cleaning up after shim disconnected" id=8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49 namespace=k8s.io
Apr 30 03:30:50.494697 containerd[1462]: time="2025-04-30T03:30:50.494675746Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:50.533667 containerd[1462]: time="2025-04-30T03:30:50.532428027Z" level=info msg="StopContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" returns successfully"
Apr 30 03:30:50.535560 containerd[1462]: time="2025-04-30T03:30:50.535510293Z" level=info msg="StopPodSandbox for \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\""
Apr 30 03:30:50.535748 containerd[1462]: time="2025-04-30T03:30:50.535608999Z" level=info msg="Container to stop \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:30:50.546532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c-shm.mount: Deactivated successfully.
Apr 30 03:30:50.566221 systemd[1]: cri-containerd-9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c.scope: Deactivated successfully.
Apr 30 03:30:50.619306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c-rootfs.mount: Deactivated successfully.
Apr 30 03:30:50.624220 containerd[1462]: time="2025-04-30T03:30:50.623874264Z" level=info msg="shim disconnected" id=9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c namespace=k8s.io
Apr 30 03:30:50.624220 containerd[1462]: time="2025-04-30T03:30:50.623965251Z" level=warning msg="cleaning up after shim disconnected" id=9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c namespace=k8s.io
Apr 30 03:30:50.624220 containerd[1462]: time="2025-04-30T03:30:50.623980707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:50.737877 systemd-networkd[1374]: calif965e1a30d6: Link DOWN
Apr 30 03:30:50.737908 systemd-networkd[1374]: calif965e1a30d6: Lost carrier
Apr 30 03:30:50.816756 kubelet[2602]: I0430 03:30:50.816132 2602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.735 [INFO][6070] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.735 [INFO][6070] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" iface="eth0" netns="/var/run/netns/cni-661e5ea3-7d57-d715-63a2-2fc5df13b305"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.737 [INFO][6070] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" iface="eth0" netns="/var/run/netns/cni-661e5ea3-7d57-d715-63a2-2fc5df13b305"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.746 [INFO][6070] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" after=10.537476ms iface="eth0" netns="/var/run/netns/cni-661e5ea3-7d57-d715-63a2-2fc5df13b305"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.746 [INFO][6070] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.746 [INFO][6070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.795 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.795 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.795 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.855 [INFO][6078] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.857 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0"
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.860 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:50.865851 containerd[1462]: 2025-04-30 03:30:50.862 [INFO][6070] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c"
Apr 30 03:30:50.870138 containerd[1462]: time="2025-04-30T03:30:50.870005033Z" level=info msg="TearDown network for sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" successfully"
Apr 30 03:30:50.870138 containerd[1462]: time="2025-04-30T03:30:50.870060833Z" level=info msg="StopPodSandbox for \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" returns successfully"
Apr 30 03:30:50.882531 systemd[1]: run-netns-cni\x2d661e5ea3\x2d7d57\x2dd715\x2d63a2\x2d2fc5df13b305.mount: Deactivated successfully.
Apr 30 03:30:50.943147 kubelet[2602]: I0430 03:30:50.942521 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4490f834-9862-47b3-94e9-1a6cf67f5b80-tigera-ca-bundle\") pod \"4490f834-9862-47b3-94e9-1a6cf67f5b80\" (UID: \"4490f834-9862-47b3-94e9-1a6cf67f5b80\") "
Apr 30 03:30:50.943147 kubelet[2602]: I0430 03:30:50.942592 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc6xm\" (UniqueName: \"kubernetes.io/projected/4490f834-9862-47b3-94e9-1a6cf67f5b80-kube-api-access-tc6xm\") pod \"4490f834-9862-47b3-94e9-1a6cf67f5b80\" (UID: \"4490f834-9862-47b3-94e9-1a6cf67f5b80\") "
Apr 30 03:30:50.951275 kubelet[2602]: I0430 03:30:50.951216 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4490f834-9862-47b3-94e9-1a6cf67f5b80-kube-api-access-tc6xm" (OuterVolumeSpecName: "kube-api-access-tc6xm") pod "4490f834-9862-47b3-94e9-1a6cf67f5b80" (UID: "4490f834-9862-47b3-94e9-1a6cf67f5b80"). InnerVolumeSpecName "kube-api-access-tc6xm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:30:50.962407 systemd[1]: var-lib-kubelet-pods-4490f834\x2d9862\x2d47b3\x2d94e9\x2d1a6cf67f5b80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtc6xm.mount: Deactivated successfully.
Apr 30 03:30:50.965422 kubelet[2602]: I0430 03:30:50.965359 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4490f834-9862-47b3-94e9-1a6cf67f5b80-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "4490f834-9862-47b3-94e9-1a6cf67f5b80" (UID: "4490f834-9862-47b3-94e9-1a6cf67f5b80"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:30:51.043651 kubelet[2602]: I0430 03:30:51.043506 2602 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4490f834-9862-47b3-94e9-1a6cf67f5b80-tigera-ca-bundle\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\""
Apr 30 03:30:51.043651 kubelet[2602]: I0430 03:30:51.043562 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tc6xm\" (UniqueName: \"kubernetes.io/projected/4490f834-9862-47b3-94e9-1a6cf67f5b80-kube-api-access-tc6xm\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\""
Apr 30 03:30:51.468506 systemd[1]: var-lib-kubelet-pods-4490f834\x2d9862\x2d47b3\x2d94e9\x2d1a6cf67f5b80-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Apr 30 03:30:51.826915 systemd[1]: Removed slice kubepods-besteffort-pod4490f834_9862_47b3_94e9_1a6cf67f5b80.slice - libcontainer container kubepods-besteffort-pod4490f834_9862_47b3_94e9_1a6cf67f5b80.slice.
Apr 30 03:30:51.878236 kubelet[2602]: I0430 03:30:51.878172 2602 topology_manager.go:215] "Topology Admit Handler" podUID="9ac779f5-f888-45a0-b421-fdea7435951c" podNamespace="calico-system" podName="calico-kube-controllers-6558588999-s9fnw"
Apr 30 03:30:51.878839 kubelet[2602]: E0430 03:30:51.878281 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca" containerName="calico-apiserver"
Apr 30 03:30:51.878839 kubelet[2602]: E0430 03:30:51.878298 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80" containerName="calico-kube-controllers"
Apr 30 03:30:51.878839 kubelet[2602]: E0430 03:30:51.878310 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5" containerName="calico-apiserver"
Apr 30 03:30:51.878839 kubelet[2602]: I0430 03:30:51.878355 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80" containerName="calico-kube-controllers"
Apr 30 03:30:51.878839 kubelet[2602]: I0430 03:30:51.878369 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bbf289e-3bd8-4b3c-9652-ef642934c0ca" containerName="calico-apiserver"
Apr 30 03:30:51.878839 kubelet[2602]: I0430 03:30:51.878380 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f3a9e5-5fac-471b-a36c-1452742abca5" containerName="calico-apiserver"
Apr 30 03:30:51.896131 systemd[1]: Created slice kubepods-besteffort-pod9ac779f5_f888_45a0_b421_fdea7435951c.slice - libcontainer container kubepods-besteffort-pod9ac779f5_f888_45a0_b421_fdea7435951c.slice.
Apr 30 03:30:51.950308 kubelet[2602]: I0430 03:30:51.950239 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hptt\" (UniqueName: \"kubernetes.io/projected/9ac779f5-f888-45a0-b421-fdea7435951c-kube-api-access-4hptt\") pod \"calico-kube-controllers-6558588999-s9fnw\" (UID: \"9ac779f5-f888-45a0-b421-fdea7435951c\") " pod="calico-system/calico-kube-controllers-6558588999-s9fnw"
Apr 30 03:30:51.950625 kubelet[2602]: I0430 03:30:51.950350 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac779f5-f888-45a0-b421-fdea7435951c-tigera-ca-bundle\") pod \"calico-kube-controllers-6558588999-s9fnw\" (UID: \"9ac779f5-f888-45a0-b421-fdea7435951c\") " pod="calico-system/calico-kube-controllers-6558588999-s9fnw"
Apr 30 03:30:52.193276 kubelet[2602]: I0430 03:30:52.192738 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4490f834-9862-47b3-94e9-1a6cf67f5b80" path="/var/lib/kubelet/pods/4490f834-9862-47b3-94e9-1a6cf67f5b80/volumes"
Apr 30 03:30:52.202504 containerd[1462]: time="2025-04-30T03:30:52.202424362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6558588999-s9fnw,Uid:9ac779f5-f888-45a0-b421-fdea7435951c,Namespace:calico-system,Attempt:0,}"
Apr 30 03:30:52.491137 systemd-networkd[1374]: calif356528ae18: Link UP
Apr 30 03:30:52.491551 systemd-networkd[1374]: calif356528ae18: Gained carrier
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.335 [INFO][6115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0 calico-kube-controllers-6558588999- calico-system 9ac779f5-f888-45a0-b421-fdea7435951c 1207 0 2025-04-30 03:30:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6558588999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal calico-kube-controllers-6558588999-s9fnw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif356528ae18 [] []}} ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.336 [INFO][6115] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.416 [INFO][6127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" HandleID="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.438 [INFO][6127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" HandleID="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6558588999-s9fnw", "timestamp":"2025-04-30 03:30:52.416215327 +0000 UTC"}, Hostname:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.438 [INFO][6127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.438 [INFO][6127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.438 [INFO][6127] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal'
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.441 [INFO][6127] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.449 [INFO][6127] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.454 [INFO][6127] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.457 [INFO][6127] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.461 [INFO][6127] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.461 [INFO][6127] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.464 [INFO][6127] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.469 [INFO][6127] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.480 [INFO][6127] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.201/26] block=192.168.18.192/26 handle="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.480 [INFO][6127] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.201/26] handle="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" host="ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal"
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.480 [INFO][6127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:52.515641 containerd[1462]: 2025-04-30 03:30:52.480 [INFO][6127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.201/26] IPv6=[] ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" HandleID="k8s-pod-network.5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" Apr 30 03:30:52.519412 containerd[1462]: 2025-04-30 03:30:52.484 [INFO][6115] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0", GenerateName:"calico-kube-controllers-6558588999-", Namespace:"calico-system", SelfLink:"", UID:"9ac779f5-f888-45a0-b421-fdea7435951c", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6558588999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6558588999-s9fnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif356528ae18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:52.519412 containerd[1462]: 2025-04-30 03:30:52.484 [INFO][6115] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.201/32] ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" Apr 30 03:30:52.519412 containerd[1462]: 2025-04-30 03:30:52.484 [INFO][6115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif356528ae18 ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" Apr 30 03:30:52.519412 containerd[1462]: 2025-04-30 03:30:52.491 [INFO][6115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" Apr 30 03:30:52.519412 containerd[1462]: 2025-04-30 03:30:52.491 [INFO][6115] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0", GenerateName:"calico-kube-controllers-6558588999-", Namespace:"calico-system", SelfLink:"", UID:"9ac779f5-f888-45a0-b421-fdea7435951c", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6558588999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal", ContainerID:"5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85", Pod:"calico-kube-controllers-6558588999-s9fnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif356528ae18", MAC:"da:e1:32:42:e1:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:52.519412 containerd[1462]: 
2025-04-30 03:30:52.510 [INFO][6115] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85" Namespace="calico-system" Pod="calico-kube-controllers-6558588999-s9fnw" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--6558588999--s9fnw-eth0" Apr 30 03:30:52.570421 containerd[1462]: time="2025-04-30T03:30:52.567661040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:52.570706 containerd[1462]: time="2025-04-30T03:30:52.570432420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:52.570706 containerd[1462]: time="2025-04-30T03:30:52.570465778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:52.571420 containerd[1462]: time="2025-04-30T03:30:52.570731601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:52.624481 systemd[1]: Started cri-containerd-5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85.scope - libcontainer container 5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85. 
Apr 30 03:30:52.690233 containerd[1462]: time="2025-04-30T03:30:52.690078444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6558588999-s9fnw,Uid:9ac779f5-f888-45a0-b421-fdea7435951c,Namespace:calico-system,Attempt:0,} returns sandbox id \"5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85\"" Apr 30 03:30:52.712455 containerd[1462]: time="2025-04-30T03:30:52.712288875Z" level=info msg="CreateContainer within sandbox \"5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:30:52.738718 containerd[1462]: time="2025-04-30T03:30:52.738590951Z" level=info msg="CreateContainer within sandbox \"5edb770270af16f3c6a6da7fde2f01eeec6002fd4a5c83254430aff627eb7d85\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37\"" Apr 30 03:30:52.739427 containerd[1462]: time="2025-04-30T03:30:52.739378801Z" level=info msg="StartContainer for \"242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37\"" Apr 30 03:30:52.779235 systemd[1]: Started cri-containerd-242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37.scope - libcontainer container 242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37. 
Apr 30 03:30:52.846698 containerd[1462]: time="2025-04-30T03:30:52.846289042Z" level=info msg="StartContainer for \"242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37\" returns successfully" Apr 30 03:30:53.847378 kubelet[2602]: I0430 03:30:53.846872 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6558588999-s9fnw" podStartSLOduration=2.846727833 podStartE2EDuration="2.846727833s" podCreationTimestamp="2025-04-30 03:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:53.844657972 +0000 UTC m=+85.819299719" watchObservedRunningTime="2025-04-30 03:30:53.846727833 +0000 UTC m=+85.821369581" Apr 30 03:30:54.530413 systemd-networkd[1374]: calif356528ae18: Gained IPv6LL Apr 30 03:30:54.844238 systemd[1]: cri-containerd-0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f.scope: Deactivated successfully. Apr 30 03:30:54.908224 containerd[1462]: time="2025-04-30T03:30:54.908143878Z" level=info msg="shim disconnected" id=0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f namespace=k8s.io Apr 30 03:30:54.908224 containerd[1462]: time="2025-04-30T03:30:54.908221682Z" level=warning msg="cleaning up after shim disconnected" id=0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f namespace=k8s.io Apr 30 03:30:54.909474 containerd[1462]: time="2025-04-30T03:30:54.908235870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:54.918126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f-rootfs.mount: Deactivated successfully. Apr 30 03:30:54.943431 systemd[1]: Started sshd@13-10.128.0.99:22-139.178.68.195:60532.service - OpenSSH per-connection server daemon (139.178.68.195:60532). 
Apr 30 03:30:54.964927 containerd[1462]: time="2025-04-30T03:30:54.962599150Z" level=info msg="StopContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" returns successfully" Apr 30 03:30:54.967457 containerd[1462]: time="2025-04-30T03:30:54.966238881Z" level=info msg="StopPodSandbox for \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\"" Apr 30 03:30:54.967457 containerd[1462]: time="2025-04-30T03:30:54.966324857Z" level=info msg="Container to stop \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:54.983643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45-shm.mount: Deactivated successfully. Apr 30 03:30:54.994351 systemd[1]: cri-containerd-c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45.scope: Deactivated successfully. Apr 30 03:30:55.051998 containerd[1462]: time="2025-04-30T03:30:55.051237061Z" level=info msg="shim disconnected" id=c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45 namespace=k8s.io Apr 30 03:30:55.051998 containerd[1462]: time="2025-04-30T03:30:55.051712652Z" level=warning msg="cleaning up after shim disconnected" id=c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45 namespace=k8s.io Apr 30 03:30:55.051998 containerd[1462]: time="2025-04-30T03:30:55.051732360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:55.054563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45-rootfs.mount: Deactivated successfully. 
Apr 30 03:30:55.086357 containerd[1462]: time="2025-04-30T03:30:55.086285120Z" level=info msg="TearDown network for sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" successfully" Apr 30 03:30:55.086357 containerd[1462]: time="2025-04-30T03:30:55.086334724Z" level=info msg="StopPodSandbox for \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" returns successfully" Apr 30 03:30:55.174001 kubelet[2602]: I0430 03:30:55.173827 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjwjs\" (UniqueName: \"kubernetes.io/projected/8104c962-c19c-42ff-8eb1-2545483a40fe-kube-api-access-hjwjs\") pod \"8104c962-c19c-42ff-8eb1-2545483a40fe\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " Apr 30 03:30:55.174001 kubelet[2602]: I0430 03:30:55.173967 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8104c962-c19c-42ff-8eb1-2545483a40fe-typha-certs\") pod \"8104c962-c19c-42ff-8eb1-2545483a40fe\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " Apr 30 03:30:55.176983 kubelet[2602]: I0430 03:30:55.174591 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8104c962-c19c-42ff-8eb1-2545483a40fe-tigera-ca-bundle\") pod \"8104c962-c19c-42ff-8eb1-2545483a40fe\" (UID: \"8104c962-c19c-42ff-8eb1-2545483a40fe\") " Apr 30 03:30:55.196255 kubelet[2602]: I0430 03:30:55.196162 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8104c962-c19c-42ff-8eb1-2545483a40fe-kube-api-access-hjwjs" (OuterVolumeSpecName: "kube-api-access-hjwjs") pod "8104c962-c19c-42ff-8eb1-2545483a40fe" (UID: "8104c962-c19c-42ff-8eb1-2545483a40fe"). InnerVolumeSpecName "kube-api-access-hjwjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:30:55.196255 kubelet[2602]: I0430 03:30:55.196215 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8104c962-c19c-42ff-8eb1-2545483a40fe-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "8104c962-c19c-42ff-8eb1-2545483a40fe" (UID: "8104c962-c19c-42ff-8eb1-2545483a40fe"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:30:55.202698 systemd[1]: var-lib-kubelet-pods-8104c962\x2dc19c\x2d42ff\x2d8eb1\x2d2545483a40fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjwjs.mount: Deactivated successfully. Apr 30 03:30:55.202870 systemd[1]: var-lib-kubelet-pods-8104c962\x2dc19c\x2d42ff\x2d8eb1\x2d2545483a40fe-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Apr 30 03:30:55.206539 kubelet[2602]: I0430 03:30:55.206484 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8104c962-c19c-42ff-8eb1-2545483a40fe-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8104c962-c19c-42ff-8eb1-2545483a40fe" (UID: "8104c962-c19c-42ff-8eb1-2545483a40fe"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:30:55.269375 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 60532 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:30:55.272185 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:55.278162 kubelet[2602]: I0430 03:30:55.277969 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hjwjs\" (UniqueName: \"kubernetes.io/projected/8104c962-c19c-42ff-8eb1-2545483a40fe-kube-api-access-hjwjs\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:30:55.278335 kubelet[2602]: I0430 03:30:55.278175 2602 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8104c962-c19c-42ff-8eb1-2545483a40fe-typha-certs\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:30:55.278778 kubelet[2602]: I0430 03:30:55.278632 2602 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8104c962-c19c-42ff-8eb1-2545483a40fe-tigera-ca-bundle\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:30:55.283219 systemd-logind[1443]: New session 14 of user core. Apr 30 03:30:55.293266 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:30:55.566792 sshd[6308]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:55.572632 systemd[1]: sshd@13-10.128.0.99:22-139.178.68.195:60532.service: Deactivated successfully. Apr 30 03:30:55.578432 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:30:55.581560 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:30:55.583455 systemd-logind[1443]: Removed session 14. 
Apr 30 03:30:55.839717 kubelet[2602]: I0430 03:30:55.837341 2602 scope.go:117] "RemoveContainer" containerID="0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f" Apr 30 03:30:55.842362 containerd[1462]: time="2025-04-30T03:30:55.842300253Z" level=info msg="RemoveContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\"" Apr 30 03:30:55.848351 containerd[1462]: time="2025-04-30T03:30:55.848303343Z" level=info msg="RemoveContainer for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" returns successfully" Apr 30 03:30:55.848686 systemd[1]: Removed slice kubepods-besteffort-pod8104c962_c19c_42ff_8eb1_2545483a40fe.slice - libcontainer container kubepods-besteffort-pod8104c962_c19c_42ff_8eb1_2545483a40fe.slice. Apr 30 03:30:55.849350 kubelet[2602]: I0430 03:30:55.848982 2602 scope.go:117] "RemoveContainer" containerID="0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f" Apr 30 03:30:55.849647 containerd[1462]: time="2025-04-30T03:30:55.849600219Z" level=error msg="ContainerStatus for \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\": not found" Apr 30 03:30:55.850795 kubelet[2602]: E0430 03:30:55.850679 2602 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\": not found" containerID="0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f" Apr 30 03:30:55.850795 kubelet[2602]: I0430 03:30:55.850725 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f"} err="failed to get container status 
\"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0320516160119b518b80994421c225d422b3bbb50a21ac30ca4af4ed5241fa7f\": not found" Apr 30 03:30:55.874027 systemd[1]: var-lib-kubelet-pods-8104c962\x2dc19c\x2d42ff\x2d8eb1\x2d2545483a40fe-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Apr 30 03:30:56.192305 kubelet[2602]: I0430 03:30:56.192150 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8104c962-c19c-42ff-8eb1-2545483a40fe" path="/var/lib/kubelet/pods/8104c962-c19c-42ff-8eb1-2545483a40fe/volumes" Apr 30 03:30:56.553645 ntpd[1431]: Listen normally on 17 calif356528ae18 [fe80::ecee:eeff:feee:eeee%15]:123 Apr 30 03:30:56.553729 ntpd[1431]: Deleting interface #15 calif965e1a30d6, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Apr 30 03:30:56.554248 ntpd[1431]: 30 Apr 03:30:56 ntpd[1431]: Listen normally on 17 calif356528ae18 [fe80::ecee:eeff:feee:eeee%15]:123 Apr 30 03:30:56.554248 ntpd[1431]: 30 Apr 03:30:56 ntpd[1431]: Deleting interface #15 calif965e1a30d6, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Apr 30 03:30:59.749243 containerd[1462]: time="2025-04-30T03:30:59.749184643Z" level=info msg="StopContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" with timeout 5 (s)" Apr 30 03:30:59.749999 containerd[1462]: time="2025-04-30T03:30:59.749732130Z" level=info msg="Stop container \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" with signal terminated" Apr 30 03:30:59.771244 systemd[1]: cri-containerd-29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9.scope: Deactivated successfully. 
Apr 30 03:30:59.772053 systemd[1]: cri-containerd-29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9.scope: Consumed 6.008s CPU time. Apr 30 03:30:59.806513 containerd[1462]: time="2025-04-30T03:30:59.806233148Z" level=info msg="shim disconnected" id=29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9 namespace=k8s.io Apr 30 03:30:59.806513 containerd[1462]: time="2025-04-30T03:30:59.806471506Z" level=warning msg="cleaning up after shim disconnected" id=29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9 namespace=k8s.io Apr 30 03:30:59.807176 containerd[1462]: time="2025-04-30T03:30:59.806766117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:59.810670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9-rootfs.mount: Deactivated successfully. Apr 30 03:30:59.842916 containerd[1462]: time="2025-04-30T03:30:59.842799190Z" level=info msg="StopContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" returns successfully" Apr 30 03:30:59.843641 containerd[1462]: time="2025-04-30T03:30:59.843529971Z" level=info msg="StopPodSandbox for \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\"" Apr 30 03:30:59.843641 containerd[1462]: time="2025-04-30T03:30:59.843596236Z" level=info msg="Container to stop \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:59.843641 containerd[1462]: time="2025-04-30T03:30:59.843622891Z" level=info msg="Container to stop \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:59.843641 containerd[1462]: time="2025-04-30T03:30:59.843641099Z" level=info msg="Container to stop \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:30:59.851364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91-shm.mount: Deactivated successfully. Apr 30 03:30:59.860705 systemd[1]: cri-containerd-74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91.scope: Deactivated successfully. Apr 30 03:30:59.896496 containerd[1462]: time="2025-04-30T03:30:59.894241129Z" level=info msg="shim disconnected" id=74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91 namespace=k8s.io Apr 30 03:30:59.896496 containerd[1462]: time="2025-04-30T03:30:59.894376375Z" level=warning msg="cleaning up after shim disconnected" id=74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91 namespace=k8s.io Apr 30 03:30:59.896496 containerd[1462]: time="2025-04-30T03:30:59.894393560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:30:59.907156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91-rootfs.mount: Deactivated successfully. 
Apr 30 03:30:59.924805 containerd[1462]: time="2025-04-30T03:30:59.924748330Z" level=info msg="TearDown network for sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" successfully" Apr 30 03:30:59.924805 containerd[1462]: time="2025-04-30T03:30:59.924795739Z" level=info msg="StopPodSandbox for \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" returns successfully" Apr 30 03:30:59.973714 kubelet[2602]: I0430 03:30:59.973638 2602 topology_manager.go:215] "Topology Admit Handler" podUID="bee580c0-0d69-43c2-bb2b-93f0f9076094" podNamespace="calico-system" podName="calico-node-d8lv6" Apr 30 03:30:59.974424 kubelet[2602]: E0430 03:30:59.973919 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" containerName="flexvol-driver" Apr 30 03:30:59.974424 kubelet[2602]: E0430 03:30:59.973945 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8104c962-c19c-42ff-8eb1-2545483a40fe" containerName="calico-typha" Apr 30 03:30:59.974424 kubelet[2602]: E0430 03:30:59.973958 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" containerName="install-cni" Apr 30 03:30:59.974424 kubelet[2602]: E0430 03:30:59.974086 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" containerName="calico-node" Apr 30 03:30:59.974424 kubelet[2602]: I0430 03:30:59.974171 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="8104c962-c19c-42ff-8eb1-2545483a40fe" containerName="calico-typha" Apr 30 03:30:59.974424 kubelet[2602]: I0430 03:30:59.974187 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" containerName="calico-node" Apr 30 03:30:59.993841 systemd[1]: Created slice kubepods-besteffort-podbee580c0_0d69_43c2_bb2b_93f0f9076094.slice - libcontainer container 
kubepods-besteffort-podbee580c0_0d69_43c2_bb2b_93f0f9076094.slice. Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015704 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs5xg\" (UniqueName: \"kubernetes.io/projected/0fe07556-52b9-47e3-914d-856a747fb4e0-kube-api-access-cs5xg\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015758 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-bin-dir\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015786 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-lib-modules\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015810 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-policysync\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015839 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-run-calico\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.015878 kubelet[2602]: I0430 03:31:00.015862 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-net-dir\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.015900 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-lib-calico\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.015926 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-flexvol-driver-host\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.015951 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-log-dir\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.015980 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fe07556-52b9-47e3-914d-856a747fb4e0-tigera-ca-bundle\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.016006 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-xtables-lock\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016423 kubelet[2602]: I0430 03:31:00.016037 2602 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0fe07556-52b9-47e3-914d-856a747fb4e0-node-certs\") pod \"0fe07556-52b9-47e3-914d-856a747fb4e0\" (UID: \"0fe07556-52b9-47e3-914d-856a747fb4e0\") " Apr 30 03:31:00.016763 kubelet[2602]: I0430 03:31:00.016101 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-cni-bin-dir\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.016763 kubelet[2602]: I0430 03:31:00.016143 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-flexvol-driver-host\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.016763 kubelet[2602]: I0430 03:31:00.016194 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-lib-modules\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.016763 kubelet[2602]: I0430 03:31:00.016229 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-cni-log-dir\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.016763 kubelet[2602]: I0430 03:31:00.016259 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bee580c0-0d69-43c2-bb2b-93f0f9076094-tigera-ca-bundle\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017572 kubelet[2602]: I0430 03:31:00.016288 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-var-lib-calico\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017572 kubelet[2602]: I0430 03:31:00.016319 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffr5c\" (UniqueName: \"kubernetes.io/projected/bee580c0-0d69-43c2-bb2b-93f0f9076094-kube-api-access-ffr5c\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017572 kubelet[2602]: I0430 03:31:00.016350 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-policysync\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017572 kubelet[2602]: I0430 03:31:00.016383 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bee580c0-0d69-43c2-bb2b-93f0f9076094-node-certs\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017572 kubelet[2602]: I0430 03:31:00.016411 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-cni-net-dir\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017880 kubelet[2602]: I0430 03:31:00.016441 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-xtables-lock\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.017880 kubelet[2602]: I0430 03:31:00.016469 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bee580c0-0d69-43c2-bb2b-93f0f9076094-var-run-calico\") pod \"calico-node-d8lv6\" (UID: \"bee580c0-0d69-43c2-bb2b-93f0f9076094\") " pod="calico-system/calico-node-d8lv6" Apr 30 03:31:00.025067 kubelet[2602]: I0430 03:31:00.024684 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027120 kubelet[2602]: I0430 03:31:00.025610 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027120 kubelet[2602]: I0430 03:31:00.025669 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-policysync" (OuterVolumeSpecName: "policysync") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027120 kubelet[2602]: I0430 03:31:00.025697 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027120 kubelet[2602]: I0430 03:31:00.025725 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027120 kubelet[2602]: I0430 03:31:00.025756 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027496 kubelet[2602]: I0430 03:31:00.025783 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.027496 kubelet[2602]: I0430 03:31:00.025812 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.029045 systemd[1]: var-lib-kubelet-pods-0fe07556\x2d52b9\x2d47e3\x2d914d\x2d856a747fb4e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcs5xg.mount: Deactivated successfully. Apr 30 03:31:00.035578 kubelet[2602]: I0430 03:31:00.034271 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe07556-52b9-47e3-914d-856a747fb4e0-kube-api-access-cs5xg" (OuterVolumeSpecName: "kube-api-access-cs5xg") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "kube-api-access-cs5xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:31:00.035854 kubelet[2602]: I0430 03:31:00.035810 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:00.036145 kubelet[2602]: I0430 03:31:00.036117 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe07556-52b9-47e3-914d-856a747fb4e0-node-certs" (OuterVolumeSpecName: "node-certs") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:31:00.038693 kubelet[2602]: I0430 03:31:00.038653 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe07556-52b9-47e3-914d-856a747fb4e0-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0fe07556-52b9-47e3-914d-856a747fb4e0" (UID: "0fe07556-52b9-47e3-914d-856a747fb4e0"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117762 2602 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-policysync\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117818 2602 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-run-calico\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117837 2602 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-log-dir\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117931 2602 reconciler_common.go:289] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-xtables-lock\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117953 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cs5xg\" (UniqueName: \"kubernetes.io/projected/0fe07556-52b9-47e3-914d-856a747fb4e0-kube-api-access-cs5xg\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117973 2602 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-bin-dir\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119087 kubelet[2602]: I0430 03:31:00.117990 2602 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-lib-modules\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119632 kubelet[2602]: I0430 03:31:00.118005 2602 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fe07556-52b9-47e3-914d-856a747fb4e0-tigera-ca-bundle\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119632 kubelet[2602]: I0430 03:31:00.118036 2602 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-cni-net-dir\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119632 kubelet[2602]: I0430 03:31:00.118051 2602 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-var-lib-calico\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119632 kubelet[2602]: I0430 03:31:00.118068 2602 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0fe07556-52b9-47e3-914d-856a747fb4e0-flexvol-driver-host\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.119632 kubelet[2602]: I0430 03:31:00.118084 2602 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0fe07556-52b9-47e3-914d-856a747fb4e0-node-certs\") on node \"ci-4081-3-3-ca406f5d8fb8a4efa166.c.flatcar-212911.internal\" DevicePath \"\"" Apr 30 03:31:00.197254 systemd[1]: Removed slice kubepods-besteffort-pod0fe07556_52b9_47e3_914d_856a747fb4e0.slice - libcontainer container kubepods-besteffort-pod0fe07556_52b9_47e3_914d_856a747fb4e0.slice. Apr 30 03:31:00.197489 systemd[1]: kubepods-besteffort-pod0fe07556_52b9_47e3_914d_856a747fb4e0.slice: Consumed 6.750s CPU time. Apr 30 03:31:00.300164 containerd[1462]: time="2025-04-30T03:31:00.299818528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8lv6,Uid:bee580c0-0d69-43c2-bb2b-93f0f9076094,Namespace:calico-system,Attempt:0,}" Apr 30 03:31:00.331261 containerd[1462]: time="2025-04-30T03:31:00.330670167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:00.331261 containerd[1462]: time="2025-04-30T03:31:00.330850899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:00.331261 containerd[1462]: time="2025-04-30T03:31:00.331074071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:00.332566 containerd[1462]: time="2025-04-30T03:31:00.332444562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:00.359147 systemd[1]: Started cri-containerd-37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720.scope - libcontainer container 37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720. Apr 30 03:31:00.393791 containerd[1462]: time="2025-04-30T03:31:00.393734417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8lv6,Uid:bee580c0-0d69-43c2-bb2b-93f0f9076094,Namespace:calico-system,Attempt:0,} returns sandbox id \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\"" Apr 30 03:31:00.397934 containerd[1462]: time="2025-04-30T03:31:00.397763426Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:31:00.416956 containerd[1462]: time="2025-04-30T03:31:00.416864941Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd\"" Apr 30 03:31:00.418660 containerd[1462]: time="2025-04-30T03:31:00.417989074Z" level=info msg="StartContainer for \"ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd\"" Apr 30 03:31:00.458228 systemd[1]: Started cri-containerd-ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd.scope - libcontainer container ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd. 
Apr 30 03:31:00.499339 containerd[1462]: time="2025-04-30T03:31:00.499246687Z" level=info msg="StartContainer for \"ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd\" returns successfully" Apr 30 03:31:00.518294 systemd[1]: cri-containerd-ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd.scope: Deactivated successfully. Apr 30 03:31:00.559269 containerd[1462]: time="2025-04-30T03:31:00.559103872Z" level=info msg="shim disconnected" id=ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd namespace=k8s.io Apr 30 03:31:00.559879 containerd[1462]: time="2025-04-30T03:31:00.559570668Z" level=warning msg="cleaning up after shim disconnected" id=ba69660d0095ca37e1502b890ea7698777c368fd6569129584f3a0c2cdb5fecd namespace=k8s.io Apr 30 03:31:00.559879 containerd[1462]: time="2025-04-30T03:31:00.559617667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:00.622411 systemd[1]: Started sshd@14-10.128.0.99:22-139.178.68.195:52616.service - OpenSSH per-connection server daemon (139.178.68.195:52616). Apr 30 03:31:00.662474 systemd[1]: var-lib-kubelet-pods-0fe07556\x2d52b9\x2d47e3\x2d914d\x2d856a747fb4e0-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:31:00.662643 systemd[1]: var-lib-kubelet-pods-0fe07556\x2d52b9\x2d47e3\x2d914d\x2d856a747fb4e0-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Apr 30 03:31:00.871659 containerd[1462]: time="2025-04-30T03:31:00.870074431Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:31:00.874562 kubelet[2602]: I0430 03:31:00.873201 2602 scope.go:117] "RemoveContainer" containerID="29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9" Apr 30 03:31:00.878830 containerd[1462]: time="2025-04-30T03:31:00.878727506Z" level=info msg="RemoveContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\"" Apr 30 03:31:00.890539 containerd[1462]: time="2025-04-30T03:31:00.890474258Z" level=info msg="RemoveContainer for \"29e059b229afb503487bc03e1720ba1f9ac18c196c5b1ffc9518a30083d2d0f9\" returns successfully" Apr 30 03:31:00.890831 kubelet[2602]: I0430 03:31:00.890798 2602 scope.go:117] "RemoveContainer" containerID="f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1" Apr 30 03:31:00.893035 containerd[1462]: time="2025-04-30T03:31:00.892872078Z" level=info msg="RemoveContainer for \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\"" Apr 30 03:31:00.914237 containerd[1462]: time="2025-04-30T03:31:00.913377121Z" level=info msg="RemoveContainer for \"f089e05fa789557b1a3c9b1ff268f967079c76a017b53552a5a570857017d0a1\" returns successfully" Apr 30 03:31:00.916988 kubelet[2602]: I0430 03:31:00.916932 2602 scope.go:117] "RemoveContainer" containerID="64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e" Apr 30 03:31:00.921843 containerd[1462]: time="2025-04-30T03:31:00.920423895Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6\"" Apr 30 03:31:00.924032 sshd[6667]: Accepted publickey for core from 139.178.68.195 port 52616 
ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:00.925771 containerd[1462]: time="2025-04-30T03:31:00.925482521Z" level=info msg="RemoveContainer for \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\"" Apr 30 03:31:00.926858 containerd[1462]: time="2025-04-30T03:31:00.926801502Z" level=info msg="StartContainer for \"e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6\"" Apr 30 03:31:00.934584 sshd[6667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:00.939923 containerd[1462]: time="2025-04-30T03:31:00.939824744Z" level=info msg="RemoveContainer for \"64765fd16bac527da4be2a6d2eed426571826d68302efcb87475d0f6b210c52e\" returns successfully" Apr 30 03:31:00.953246 systemd-logind[1443]: New session 15 of user core. Apr 30 03:31:00.958493 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:31:01.059588 systemd[1]: Started cri-containerd-e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6.scope - libcontainer container e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6. Apr 30 03:31:01.225652 containerd[1462]: time="2025-04-30T03:31:01.225497614Z" level=info msg="StartContainer for \"e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6\" returns successfully" Apr 30 03:31:01.319251 sshd[6667]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:01.328320 systemd[1]: sshd@14-10.128.0.99:22-139.178.68.195:52616.service: Deactivated successfully. Apr 30 03:31:01.333629 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:31:01.337747 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:31:01.340294 systemd-logind[1443]: Removed session 15. Apr 30 03:31:01.763389 systemd[1]: cri-containerd-e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6.scope: Deactivated successfully. 
Apr 30 03:31:01.796731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6-rootfs.mount: Deactivated successfully. Apr 30 03:31:01.801732 containerd[1462]: time="2025-04-30T03:31:01.801656504Z" level=info msg="shim disconnected" id=e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6 namespace=k8s.io Apr 30 03:31:01.802001 containerd[1462]: time="2025-04-30T03:31:01.801792533Z" level=warning msg="cleaning up after shim disconnected" id=e24714090d512908b019e5d983d168750a4bc90fe00b30b3b441b4e41f1065e6 namespace=k8s.io Apr 30 03:31:01.802001 containerd[1462]: time="2025-04-30T03:31:01.801813878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:01.905336 containerd[1462]: time="2025-04-30T03:31:01.905265561Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:31:01.945176 containerd[1462]: time="2025-04-30T03:31:01.944917413Z" level=info msg="CreateContainer within sandbox \"37dd7538c49fa22183d76f32512b375d9b6fe12fc00ded756472188209ad2720\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211\"" Apr 30 03:31:01.946191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142465351.mount: Deactivated successfully. Apr 30 03:31:01.950787 containerd[1462]: time="2025-04-30T03:31:01.950741547Z" level=info msg="StartContainer for \"7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211\"" Apr 30 03:31:01.990186 systemd[1]: Started cri-containerd-7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211.scope - libcontainer container 7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211. 
Apr 30 03:31:02.034450 containerd[1462]: time="2025-04-30T03:31:02.034219575Z" level=info msg="StartContainer for \"7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211\" returns successfully" Apr 30 03:31:02.196520 kubelet[2602]: I0430 03:31:02.196391 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe07556-52b9-47e3-914d-856a747fb4e0" path="/var/lib/kubelet/pods/0fe07556-52b9-47e3-914d-856a747fb4e0/volumes" Apr 30 03:31:02.931173 kubelet[2602]: I0430 03:31:02.931073 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d8lv6" podStartSLOduration=3.931033412 podStartE2EDuration="3.931033412s" podCreationTimestamp="2025-04-30 03:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:02.921470297 +0000 UTC m=+94.896112069" watchObservedRunningTime="2025-04-30 03:31:02.931033412 +0000 UTC m=+94.905675159" Apr 30 03:31:06.379360 systemd[1]: Started sshd@15-10.128.0.99:22-139.178.68.195:48112.service - OpenSSH per-connection server daemon (139.178.68.195:48112). Apr 30 03:31:06.670958 sshd[7045]: Accepted publickey for core from 139.178.68.195 port 48112 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:06.673075 sshd[7045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:06.680208 systemd-logind[1443]: New session 16 of user core. Apr 30 03:31:06.690252 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:31:06.968733 sshd[7045]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:06.974192 systemd[1]: sshd@15-10.128.0.99:22-139.178.68.195:48112.service: Deactivated successfully. Apr 30 03:31:06.977693 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:31:06.980291 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. 
Apr 30 03:31:06.982070 systemd-logind[1443]: Removed session 16. Apr 30 03:31:07.028376 systemd[1]: Started sshd@16-10.128.0.99:22-139.178.68.195:48126.service - OpenSSH per-connection server daemon (139.178.68.195:48126). Apr 30 03:31:07.321058 sshd[7057]: Accepted publickey for core from 139.178.68.195 port 48126 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:07.323053 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:07.330229 systemd-logind[1443]: New session 17 of user core. Apr 30 03:31:07.337369 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:31:07.701470 sshd[7057]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:07.707790 systemd[1]: sshd@16-10.128.0.99:22-139.178.68.195:48126.service: Deactivated successfully. Apr 30 03:31:07.710508 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:31:07.711627 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:31:07.713499 systemd-logind[1443]: Removed session 17. Apr 30 03:31:07.757390 systemd[1]: Started sshd@17-10.128.0.99:22-139.178.68.195:48142.service - OpenSSH per-connection server daemon (139.178.68.195:48142). Apr 30 03:31:08.046801 sshd[7068]: Accepted publickey for core from 139.178.68.195 port 48142 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:08.048802 sshd[7068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:08.055456 systemd-logind[1443]: New session 18 of user core. Apr 30 03:31:08.061179 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:31:10.268673 sshd[7068]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:10.277352 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:31:10.278744 systemd[1]: sshd@17-10.128.0.99:22-139.178.68.195:48142.service: Deactivated successfully. 
Apr 30 03:31:10.282961 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:31:10.285441 systemd-logind[1443]: Removed session 18. Apr 30 03:31:10.326322 systemd[1]: Started sshd@18-10.128.0.99:22-139.178.68.195:48146.service - OpenSSH per-connection server daemon (139.178.68.195:48146). Apr 30 03:31:10.609919 sshd[7095]: Accepted publickey for core from 139.178.68.195 port 48146 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:10.611982 sshd[7095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:10.621965 systemd-logind[1443]: New session 19 of user core. Apr 30 03:31:10.628467 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:31:11.049108 sshd[7095]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:11.055323 systemd[1]: sshd@18-10.128.0.99:22-139.178.68.195:48146.service: Deactivated successfully. Apr 30 03:31:11.058703 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:31:11.060476 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:31:11.062795 systemd-logind[1443]: Removed session 19. Apr 30 03:31:11.108402 systemd[1]: Started sshd@19-10.128.0.99:22-139.178.68.195:48162.service - OpenSSH per-connection server daemon (139.178.68.195:48162). Apr 30 03:31:11.396681 sshd[7106]: Accepted publickey for core from 139.178.68.195 port 48162 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:11.398690 sshd[7106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:11.405506 systemd-logind[1443]: New session 20 of user core. Apr 30 03:31:11.411162 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:31:11.689644 sshd[7106]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:11.695181 systemd[1]: sshd@19-10.128.0.99:22-139.178.68.195:48162.service: Deactivated successfully. 
Apr 30 03:31:11.698504 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:31:11.700815 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:31:11.702553 systemd-logind[1443]: Removed session 20. Apr 30 03:31:16.751401 systemd[1]: Started sshd@20-10.128.0.99:22-139.178.68.195:40070.service - OpenSSH per-connection server daemon (139.178.68.195:40070). Apr 30 03:31:17.032261 sshd[7133]: Accepted publickey for core from 139.178.68.195 port 40070 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:17.034480 sshd[7133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:17.042013 systemd-logind[1443]: New session 21 of user core. Apr 30 03:31:17.052154 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:31:17.319949 sshd[7133]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:17.325232 systemd[1]: sshd@20-10.128.0.99:22-139.178.68.195:40070.service: Deactivated successfully. Apr 30 03:31:17.327756 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:31:17.330096 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:31:17.332309 systemd-logind[1443]: Removed session 21. Apr 30 03:31:22.236837 systemd[1]: run-containerd-runc-k8s.io-242dce07b12cde418a3e4221a3c4f38ce1684e428dc3b51b5ff4631f3dee8b37-runc.TA0qob.mount: Deactivated successfully. Apr 30 03:31:22.377381 systemd[1]: Started sshd@21-10.128.0.99:22-139.178.68.195:40080.service - OpenSSH per-connection server daemon (139.178.68.195:40080). Apr 30 03:31:22.668364 sshd[7165]: Accepted publickey for core from 139.178.68.195 port 40080 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:22.670455 sshd[7165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:22.676658 systemd-logind[1443]: New session 22 of user core. 
Apr 30 03:31:22.683196 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:31:22.961574 sshd[7165]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:22.966833 systemd[1]: sshd@21-10.128.0.99:22-139.178.68.195:40080.service: Deactivated successfully. Apr 30 03:31:22.970172 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:31:22.972454 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:31:22.974611 systemd-logind[1443]: Removed session 22. Apr 30 03:31:23.027967 update_engine[1447]: I20250430 03:31:23.026578 1447 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:31:23.027967 update_engine[1447]: I20250430 03:31:23.026661 1447 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:31:23.027967 update_engine[1447]: I20250430 03:31:23.027130 1447 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:31:23.027967 update_engine[1447]: I20250430 03:31:23.027839 1447 omaha_request_params.cc:62] Current group set to lts Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028028 1447 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028049 1447 update_attempter.cc:643] Scheduling an action processor start. 
Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028076 1447 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028127 1447 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028234 1447 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028249 1447 omaha_request_action.cc:272] Request: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: Apr 30 03:31:23.028698 update_engine[1447]: I20250430 03:31:23.028262 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:31:23.032583 update_engine[1447]: I20250430 03:31:23.031104 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:31:23.032583 update_engine[1447]: I20250430 03:31:23.031860 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:31:23.033031 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:31:23.090961 update_engine[1447]: E20250430 03:31:23.090759 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:31:23.091324 update_engine[1447]: I20250430 03:31:23.091176 1447 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:31:28.019401 systemd[1]: Started sshd@22-10.128.0.99:22-139.178.68.195:34212.service - OpenSSH per-connection server daemon (139.178.68.195:34212). 
Apr 30 03:31:28.308792 sshd[7186]: Accepted publickey for core from 139.178.68.195 port 34212 ssh2: RSA SHA256:SMHEK+zhppjatNeMuFLI1UJrqR+mrZX+szs1RBpuwD0 Apr 30 03:31:28.310569 sshd[7186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:28.317404 systemd-logind[1443]: New session 23 of user core. Apr 30 03:31:28.324128 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:31:28.604259 sshd[7186]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:28.609541 systemd[1]: sshd@22-10.128.0.99:22-139.178.68.195:34212.service: Deactivated successfully. Apr 30 03:31:28.612454 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:31:28.615336 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:31:28.617210 systemd-logind[1443]: Removed session 23. Apr 30 03:31:29.676228 kubelet[2602]: I0430 03:31:29.676181 2602 scope.go:117] "RemoveContainer" containerID="8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49" Apr 30 03:31:29.678027 containerd[1462]: time="2025-04-30T03:31:29.677979820Z" level=info msg="RemoveContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\"" Apr 30 03:31:29.683492 containerd[1462]: time="2025-04-30T03:31:29.683437116Z" level=info msg="RemoveContainer for \"8b06a7915c9bc84102b8d6edf9e46a349fca410e2302736a362e63503f76ef49\" returns successfully" Apr 30 03:31:29.685460 containerd[1462]: time="2025-04-30T03:31:29.685309871Z" level=info msg="StopPodSandbox for \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\"" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.732 [WARNING][7212] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" 
WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.732 [INFO][7212] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.732 [INFO][7212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" iface="eth0" netns="" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.732 [INFO][7212] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.732 [INFO][7212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.758 [INFO][7219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.758 [INFO][7219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.758 [INFO][7219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.766 [WARNING][7219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.766 [INFO][7219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.769 [INFO][7219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:29.771678 containerd[1462]: 2025-04-30 03:31:29.770 [INFO][7212] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.771678 containerd[1462]: time="2025-04-30T03:31:29.771608226Z" level=info msg="TearDown network for sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" successfully" Apr 30 03:31:29.771678 containerd[1462]: time="2025-04-30T03:31:29.771645317Z" level=info msg="StopPodSandbox for \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" returns successfully" Apr 30 03:31:29.773317 containerd[1462]: time="2025-04-30T03:31:29.772373278Z" level=info msg="RemovePodSandbox for \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\"" Apr 30 03:31:29.773317 containerd[1462]: time="2025-04-30T03:31:29.772414570Z" level=info msg="Forcibly stopping sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\"" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.817 [WARNING][7237] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.817 [INFO][7237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.817 [INFO][7237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" iface="eth0" netns="" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.817 [INFO][7237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.817 [INFO][7237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.844 [INFO][7244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.844 [INFO][7244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.844 [INFO][7244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.852 [WARNING][7244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.852 [INFO][7244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" HandleID="k8s-pod-network.9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--kube--controllers--64db747896--slftr-eth0" Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.855 [INFO][7244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:29.858738 containerd[1462]: 2025-04-30 03:31:29.857 [INFO][7237] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c" Apr 30 03:31:29.859679 containerd[1462]: time="2025-04-30T03:31:29.858795380Z" level=info msg="TearDown network for sandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" successfully" Apr 30 03:31:29.864185 containerd[1462]: time="2025-04-30T03:31:29.864120808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:31:29.864359 containerd[1462]: time="2025-04-30T03:31:29.864219466Z" level=info msg="RemovePodSandbox \"9da304be760c51d869d80f648b69b2385845567b00d3de62fa6fa02b3b96636c\" returns successfully" Apr 30 03:31:29.864924 containerd[1462]: time="2025-04-30T03:31:29.864859232Z" level=info msg="StopPodSandbox for \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\"" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.921 [WARNING][7262] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.921 [INFO][7262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.921 [INFO][7262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" iface="eth0" netns="" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.921 [INFO][7262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.921 [INFO][7262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.947 [INFO][7270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.948 [INFO][7270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.948 [INFO][7270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.957 [WARNING][7270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.957 [INFO][7270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.958 [INFO][7270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:29.961454 containerd[1462]: 2025-04-30 03:31:29.960 [INFO][7262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:29.961454 containerd[1462]: time="2025-04-30T03:31:29.961396529Z" level=info msg="TearDown network for sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" successfully" Apr 30 03:31:29.961454 containerd[1462]: time="2025-04-30T03:31:29.961453253Z" level=info msg="StopPodSandbox for \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" returns successfully" Apr 30 03:31:29.964124 containerd[1462]: time="2025-04-30T03:31:29.962583130Z" level=info msg="RemovePodSandbox for \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\"" Apr 30 03:31:29.964124 containerd[1462]: time="2025-04-30T03:31:29.962642313Z" level=info msg="Forcibly stopping sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\"" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.011 [WARNING][7288] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist 
in the datastore, moving forward with the clean up ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.011 [INFO][7288] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.011 [INFO][7288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" iface="eth0" netns="" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.011 [INFO][7288] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.011 [INFO][7288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.040 [INFO][7295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.040 [INFO][7295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.040 [INFO][7295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.047 [WARNING][7295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.047 [INFO][7295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" HandleID="k8s-pod-network.bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--5qtrf-eth0" Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.049 [INFO][7295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:30.052906 containerd[1462]: 2025-04-30 03:31:30.050 [INFO][7288] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37" Apr 30 03:31:30.052906 containerd[1462]: time="2025-04-30T03:31:30.052333146Z" level=info msg="TearDown network for sandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" successfully" Apr 30 03:31:30.057957 containerd[1462]: time="2025-04-30T03:31:30.057865657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:31:30.058146 containerd[1462]: time="2025-04-30T03:31:30.057980961Z" level=info msg="RemovePodSandbox \"bc5511474538cc641c8c5c2c9d9328ac8c703b92c05fbd5aa4ac35499cf49a37\" returns successfully" Apr 30 03:31:30.058686 containerd[1462]: time="2025-04-30T03:31:30.058618466Z" level=info msg="StopPodSandbox for \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\"" Apr 30 03:31:30.059001 containerd[1462]: time="2025-04-30T03:31:30.058741445Z" level=info msg="TearDown network for sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" successfully" Apr 30 03:31:30.059001 containerd[1462]: time="2025-04-30T03:31:30.058761896Z" level=info msg="StopPodSandbox for \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" returns successfully" Apr 30 03:31:30.059405 containerd[1462]: time="2025-04-30T03:31:30.059370403Z" level=info msg="RemovePodSandbox for \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\"" Apr 30 03:31:30.059405 containerd[1462]: time="2025-04-30T03:31:30.059408665Z" level=info msg="Forcibly stopping sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\"" Apr 30 03:31:30.059563 containerd[1462]: time="2025-04-30T03:31:30.059496392Z" level=info msg="TearDown network for sandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" successfully" Apr 30 03:31:30.064997 containerd[1462]: time="2025-04-30T03:31:30.064939520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:31:30.065309 containerd[1462]: time="2025-04-30T03:31:30.065035148Z" level=info msg="RemovePodSandbox \"c189af57d4912624c76c868fa849e763b84b1c50e0f8e6ec6bbe281143b3da45\" returns successfully" Apr 30 03:31:30.065879 containerd[1462]: time="2025-04-30T03:31:30.065802503Z" level=info msg="StopPodSandbox for \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\"" Apr 30 03:31:30.066045 containerd[1462]: time="2025-04-30T03:31:30.066011380Z" level=info msg="TearDown network for sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" successfully" Apr 30 03:31:30.066045 containerd[1462]: time="2025-04-30T03:31:30.066039782Z" level=info msg="StopPodSandbox for \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" returns successfully" Apr 30 03:31:30.066598 containerd[1462]: time="2025-04-30T03:31:30.066557707Z" level=info msg="RemovePodSandbox for \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\"" Apr 30 03:31:30.066598 containerd[1462]: time="2025-04-30T03:31:30.066595444Z" level=info msg="Forcibly stopping sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\"" Apr 30 03:31:30.066763 containerd[1462]: time="2025-04-30T03:31:30.066681431Z" level=info msg="TearDown network for sandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" successfully" Apr 30 03:31:30.071546 containerd[1462]: time="2025-04-30T03:31:30.071467494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:31:30.071874 containerd[1462]: time="2025-04-30T03:31:30.071557795Z" level=info msg="RemovePodSandbox \"74f082f84dac2c428582e9008f88714a86faf8a2e21223b527f022a13b292c91\" returns successfully" Apr 30 03:31:30.072302 containerd[1462]: time="2025-04-30T03:31:30.072265448Z" level=info msg="StopPodSandbox for \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\"" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.119 [WARNING][7313] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.119 [INFO][7313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.119 [INFO][7313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" iface="eth0" netns="" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.119 [INFO][7313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.119 [INFO][7313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.146 [INFO][7320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.146 [INFO][7320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.146 [INFO][7320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.156 [WARNING][7320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.156 [INFO][7320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.158 [INFO][7320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:30.160968 containerd[1462]: 2025-04-30 03:31:30.159 [INFO][7313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.160968 containerd[1462]: time="2025-04-30T03:31:30.160935205Z" level=info msg="TearDown network for sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" successfully" Apr 30 03:31:30.160968 containerd[1462]: time="2025-04-30T03:31:30.160971680Z" level=info msg="StopPodSandbox for \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" returns successfully" Apr 30 03:31:30.162432 containerd[1462]: time="2025-04-30T03:31:30.161675389Z" level=info msg="RemovePodSandbox for \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\"" Apr 30 03:31:30.162432 containerd[1462]: time="2025-04-30T03:31:30.161713250Z" level=info msg="Forcibly stopping sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\"" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.226 [WARNING][7338] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist 
in the datastore, moving forward with the clean up ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" WorkloadEndpoint="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.226 [INFO][7338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.226 [INFO][7338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" iface="eth0" netns="" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.226 [INFO][7338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.226 [INFO][7338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.260 [INFO][7346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.260 [INFO][7346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.261 [INFO][7346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.270 [WARNING][7346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.270 [INFO][7346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" HandleID="k8s-pod-network.a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Workload="ci--4081--3--3--ca406f5d8fb8a4efa166.c.flatcar--212911.internal-k8s-calico--apiserver--74fd85f4d9--979dw-eth0" Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.273 [INFO][7346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:31:30.276351 containerd[1462]: 2025-04-30 03:31:30.274 [INFO][7338] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a" Apr 30 03:31:30.277240 containerd[1462]: time="2025-04-30T03:31:30.276406985Z" level=info msg="TearDown network for sandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" successfully" Apr 30 03:31:30.281721 containerd[1462]: time="2025-04-30T03:31:30.281662583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:31:30.281934 containerd[1462]: time="2025-04-30T03:31:30.281795365Z" level=info msg="RemovePodSandbox \"a8bf7c9143a7f1efc6e5a0b1859f2fbec8d6b9b0746d251ca2a3f85f505c1c7a\" returns successfully" Apr 30 03:31:30.341765 systemd[1]: run-containerd-runc-k8s.io-7e975cc465493508f05fa41b2b325a46aeeaa5cec97b5cfd9a25a51dccc80211-runc.vCRItR.mount: Deactivated successfully.