Feb 13 20:15:07.083090 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:15:07.083136 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:07.083156 kernel: BIOS-provided physical RAM map:
Feb 13 20:15:07.083171 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 20:15:07.083185 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 20:15:07.083199 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 20:15:07.083216 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 20:15:07.083236 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 20:15:07.083250 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 13 20:15:07.083264 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Feb 13 20:15:07.083279 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Feb 13 20:15:07.083294 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Feb 13 20:15:07.083308 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 20:15:07.083322 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 20:15:07.083345 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 20:15:07.083361 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 20:15:07.083377 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 20:15:07.083394 kernel: NX (Execute Disable) protection: active
Feb 13 20:15:07.083410 kernel: APIC: Static calls initialized
Feb 13 20:15:07.083425 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:15:07.083442 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Feb 13 20:15:07.083479 kernel: SMBIOS 2.4 present.
Feb 13 20:15:07.083494 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 20:15:07.083509 kernel: Hypervisor detected: KVM
Feb 13 20:15:07.083531 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:15:07.083547 kernel: kvm-clock: using sched offset of 12310751680 cycles
Feb 13 20:15:07.083565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:15:07.083583 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 20:15:07.083600 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:15:07.083618 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:15:07.083635 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 20:15:07.083652 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 20:15:07.083668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:15:07.083690 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 20:15:07.083707 kernel: Using GB pages for direct mapping
Feb 13 20:15:07.083724 kernel: Secure boot disabled
Feb 13 20:15:07.083741 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:15:07.083758 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 20:15:07.083775 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 20:15:07.083793 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 20:15:07.083837 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 20:15:07.083860 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 20:15:07.083878 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 20:15:07.083904 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 20:15:07.083922 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 20:15:07.083941 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 20:15:07.083959 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 20:15:07.083982 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 20:15:07.084000 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 20:15:07.084019 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 20:15:07.084037 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 20:15:07.084055 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 20:15:07.084074 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 20:15:07.084092 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 20:15:07.084110 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 20:15:07.084128 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 20:15:07.084150 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 20:15:07.084168 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:15:07.084187 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:15:07.084205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:15:07.084223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 20:15:07.084241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 20:15:07.084260 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 20:15:07.084279 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 20:15:07.084298 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 20:15:07.084320 kernel: Zone ranges:
Feb 13 20:15:07.084339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:15:07.084357 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:15:07.084376 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:15:07.084394 kernel: Movable zone start for each node
Feb 13 20:15:07.084413 kernel: Early memory node ranges
Feb 13 20:15:07.084431 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 20:15:07.085149 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 20:15:07.085173 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 13 20:15:07.085199 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 20:15:07.085218 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:15:07.085237 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 20:15:07.085255 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:15:07.085274 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 20:15:07.085292 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 20:15:07.085311 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 20:15:07.085330 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 20:15:07.085349 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 20:15:07.085367 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:15:07.085389 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:15:07.085408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:15:07.085425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:15:07.085459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:15:07.085479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:15:07.085498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:15:07.085516 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:15:07.085534 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 20:15:07.085557 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:15:07.085577 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:15:07.085595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:15:07.085614 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:15:07.085632 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:15:07.085650 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:15:07.085668 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:15:07.085687 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:15:07.085707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:07.085731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:15:07.085749 kernel: random: crng init done
Feb 13 20:15:07.085765 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:15:07.085784 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:15:07.085803 kernel: Fallback order for Node 0: 0
Feb 13 20:15:07.085822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Feb 13 20:15:07.085840 kernel: Policy zone: Normal
Feb 13 20:15:07.085858 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:15:07.085876 kernel: software IO TLB: area num 2.
Feb 13 20:15:07.085907 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 346928K reserved, 0K cma-reserved)
Feb 13 20:15:07.085926 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:15:07.085944 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:15:07.085962 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:15:07.085981 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:15:07.086000 kernel: Dynamic Preempt: voluntary
Feb 13 20:15:07.086018 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:15:07.086038 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:15:07.086076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:15:07.086095 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:15:07.086115 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:15:07.086138 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:15:07.086158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:15:07.086178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:15:07.086196 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:15:07.086216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:15:07.086236 kernel: Console: colour dummy device 80x25
Feb 13 20:15:07.086261 kernel: printk: console [ttyS0] enabled
Feb 13 20:15:07.086281 kernel: ACPI: Core revision 20230628
Feb 13 20:15:07.086300 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:15:07.086320 kernel: x2apic enabled
Feb 13 20:15:07.086339 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:15:07.086359 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 20:15:07.086379 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:15:07.086399 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 20:15:07.086422 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 20:15:07.086442 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 20:15:07.086481 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:15:07.086500 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:15:07.086519 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:15:07.086539 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 20:15:07.086564 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:15:07.086584 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:15:07.086604 kernel: RETBleed: Mitigation: IBRS
Feb 13 20:15:07.086629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:15:07.086649 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 20:15:07.086669 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:15:07.086689 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:15:07.086709 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:15:07.086729 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:15:07.086748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:15:07.086769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:15:07.086789 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:15:07.086813 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:15:07.086833 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:15:07.086852 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:15:07.086871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:15:07.086898 kernel: landlock: Up and running.
Feb 13 20:15:07.086918 kernel: SELinux: Initializing.
Feb 13 20:15:07.086937 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.086957 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.086977 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 20:15:07.086998 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087017 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087037 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087057 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 20:15:07.087077 kernel: signal: max sigframe size: 1776
Feb 13 20:15:07.087103 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:15:07.087124 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:15:07.087143 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:15:07.087161 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:15:07.087186 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:15:07.087206 kernel: .... node #0, CPUs: #1
Feb 13 20:15:07.087224 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 20:15:07.087242 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:15:07.087259 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:15:07.087278 kernel: smpboot: Max logical packages: 1
Feb 13 20:15:07.087294 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 20:15:07.087311 kernel: devtmpfs: initialized
Feb 13 20:15:07.087340 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:15:07.087363 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 20:15:07.087386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:15:07.087404 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:15:07.087423 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:15:07.087442 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:15:07.087554 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:15:07.087573 kernel: audit: type=2000 audit(1739477706.271:1): state=initialized audit_enabled=0 res=1
Feb 13 20:15:07.087592 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:15:07.087617 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:15:07.087635 kernel: cpuidle: using governor menu
Feb 13 20:15:07.087655 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:15:07.087674 kernel: dca service started, version 1.12.1
Feb 13 20:15:07.087693 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:15:07.087712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:15:07.087731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:15:07.087750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:15:07.087770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:15:07.087793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:15:07.087812 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:15:07.087831 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:15:07.087850 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:15:07.087869 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:15:07.087899 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 20:15:07.087918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:15:07.087937 kernel: ACPI: Interpreter enabled
Feb 13 20:15:07.087956 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 20:15:07.087979 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:15:07.087998 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:15:07.088018 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:15:07.088037 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 20:15:07.088056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:15:07.088311 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:15:07.088532 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:15:07.088717 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:15:07.088741 kernel: PCI host bridge to bus 0000:00
Feb 13 20:15:07.088925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:15:07.089090 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:15:07.089252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:07.089413 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 20:15:07.089589 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:15:07.089787 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:15:07.089993 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 20:15:07.090189 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:15:07.090370 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 20:15:07.090575 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 20:15:07.090758 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 20:15:07.090953 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 20:15:07.091153 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:07.091337 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 20:15:07.091532 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 20:15:07.091723 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:15:07.091907 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 20:15:07.092088 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 20:15:07.092118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:15:07.092137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:15:07.092156 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:15:07.092175 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:15:07.092194 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:15:07.092212 kernel: iommu: Default domain type: Translated
Feb 13 20:15:07.092230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:15:07.092250 kernel: efivars: Registered efivars operations
Feb 13 20:15:07.092268 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:15:07.092291 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:15:07.092309 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 20:15:07.092327 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 20:15:07.092345 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 20:15:07.092362 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 20:15:07.092379 kernel: vgaarb: loaded
Feb 13 20:15:07.092398 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:15:07.092416 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:15:07.092436 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:15:07.092472 kernel: pnp: PnP ACPI init
Feb 13 20:15:07.092495 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 20:15:07.092514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:15:07.092533 kernel: NET: Registered PF_INET protocol family
Feb 13 20:15:07.092552 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:15:07.092570 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:15:07.092589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:15:07.092608 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:15:07.092626 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:15:07.092645 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:15:07.092667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.092685 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.092704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:15:07.092722 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:15:07.092899 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:15:07.093061 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:15:07.093220 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:07.093383 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 20:15:07.093606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:15:07.093633 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:15:07.093654 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:15:07.093674 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 20:15:07.093693 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:15:07.093712 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:15:07.093731 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:15:07.093751 kernel: Initialise system trusted keyrings
Feb 13 20:15:07.093777 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:15:07.093797 kernel: Key type asymmetric registered
Feb 13 20:15:07.093816 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:15:07.093835 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:15:07.093855 kernel: io scheduler mq-deadline registered
Feb 13 20:15:07.093874 kernel: io scheduler kyber registered
Feb 13 20:15:07.093903 kernel: io scheduler bfq registered
Feb 13 20:15:07.093923 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:15:07.093943 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:15:07.094136 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.094161 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 20:15:07.094342 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.094366 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:15:07.095535 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.095569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:15:07.095589 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095608 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095627 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095652 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095848 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 20:15:07.095873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:15:07.095902 kernel: i8042: Warning: Keylock active
Feb 13 20:15:07.095920 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:15:07.095938 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:15:07.098659 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 20:15:07.098843 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 20:15:07.099016 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:15:06 UTC (1739477706)
Feb 13 20:15:07.099178 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 20:15:07.099202 kernel: intel_pstate: CPU model not supported
Feb 13 20:15:07.099222 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:15:07.099242 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:15:07.099261 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:15:07.099281 kernel: Segment Routing with IPv6
Feb 13 20:15:07.099304 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:15:07.099323 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:15:07.099344 kernel: Key type dns_resolver registered
Feb 13 20:15:07.099361 kernel: IPI shorthand broadcast: enabled
Feb 13 20:15:07.099377 kernel: sched_clock: Marking stable (848004233, 128465942)->(999018033, -22547858)
Feb 13 20:15:07.099392 kernel: registered taskstats version 1
Feb 13 20:15:07.099408 kernel: Loading compiled-in X.509 certificates
Feb 13 20:15:07.099425 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:15:07.099442 kernel: Key type .fscrypt registered
Feb 13 20:15:07.101499 kernel: Key type fscrypt-provisioning registered
Feb 13 20:15:07.101529 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:15:07.101549 kernel: ima: No architecture policies found
Feb 13 20:15:07.101567 kernel: clk: Disabling unused clocks
Feb 13 20:15:07.101585 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:15:07.101603 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:15:07.101621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:15:07.101637 kernel: Run /init as init process
Feb 13 20:15:07.101665 kernel: with arguments:
Feb 13 20:15:07.101687 kernel: /init
Feb 13 20:15:07.101703 kernel: with environment:
Feb 13 20:15:07.101718 kernel: HOME=/
Feb 13 20:15:07.101736 kernel: TERM=linux
Feb 13 20:15:07.101754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:15:07.101770 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:15:07.101791 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:07.101812 systemd[1]: Detected virtualization google.
Feb 13 20:15:07.101838 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:07.101856 systemd[1]: Running in initrd.
Feb 13 20:15:07.101874 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:15:07.101923 systemd[1]: Hostname set to .
Feb 13 20:15:07.101945 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:15:07.101965 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:15:07.101986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:07.102006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:07.102033 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:15:07.102054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:07.102073 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:15:07.102094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:15:07.102117 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:15:07.102138 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:15:07.102164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:07.102185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:07.102226 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:15:07.102251 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:07.102273 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:07.102295 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:15:07.102315 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:07.102338 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:07.102358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:15:07.102379 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:15:07.102400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:07.102421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:07.102511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:07.102536 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:15:07.102557 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:15:07.102582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:07.102603 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:15:07.102622 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:15:07.102642 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:07.102663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:07.102683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:07.102740 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:15:07.102787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:07.102808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:07.102828 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:15:07.102854 systemd-journald[183]: Journal started
Feb 13 20:15:07.102904 systemd-journald[183]: Runtime Journal (/run/log/journal/b47b0c173abe4e3d810635ef9ced3ec9) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:15:07.101700 systemd-modules-load[184]: Inserted module 'overlay' Feb 13 20:15:07.106643 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:15:07.119725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:15:07.121613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:15:07.142124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:07.152495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:15:07.152755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:07.154543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:15:07.163104 kernel: Bridge firewalling registered Feb 13 20:15:07.162189 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 13 20:15:07.167231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:15:07.178948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:15:07.186675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:15:07.194963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:15:07.214857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:07.218978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:15:07.227023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:15:07.243690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Feb 13 20:15:07.250683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:15:07.263793 dracut-cmdline[215]: dracut-dracut-053 Feb 13 20:15:07.269240 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:15:07.309736 systemd-resolved[219]: Positive Trust Anchors: Feb 13 20:15:07.309761 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:15:07.309825 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:15:07.315429 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 20:15:07.317417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:15:07.330705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:07.381496 kernel: SCSI subsystem initialized Feb 13 20:15:07.391492 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 20:15:07.403499 kernel: iscsi: registered transport (tcp) Feb 13 20:15:07.426883 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:15:07.426969 kernel: QLogic iSCSI HBA Driver Feb 13 20:15:07.479480 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:15:07.483706 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:15:07.525733 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:15:07.525822 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:15:07.525852 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:15:07.571506 kernel: raid6: avx2x4 gen() 18253 MB/s Feb 13 20:15:07.588492 kernel: raid6: avx2x2 gen() 18353 MB/s Feb 13 20:15:07.605919 kernel: raid6: avx2x1 gen() 14109 MB/s Feb 13 20:15:07.605999 kernel: raid6: using algorithm avx2x2 gen() 18353 MB/s Feb 13 20:15:07.623861 kernel: raid6: .... xor() 17834 MB/s, rmw enabled Feb 13 20:15:07.623914 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:15:07.647485 kernel: xor: automatically using best checksumming function avx Feb 13 20:15:07.826490 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:15:07.840351 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:15:07.855671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:15:07.872789 systemd-udevd[401]: Using default interface naming scheme 'v255'. Feb 13 20:15:07.879727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:15:07.891668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:15:07.925013 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Feb 13 20:15:07.963347 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 20:15:07.979737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:15:08.060441 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:15:08.072642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:15:08.109372 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:15:08.112904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:15:08.121562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:15:08.125609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:15:08.138131 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:15:08.176489 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:15:08.184480 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 20:15:08.185485 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:15:08.191522 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:15:08.241175 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:15:08.241251 kernel: AES CTR mode by8 optimization enabled Feb 13 20:15:08.277827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:15:08.278636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:08.288416 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:08.294548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:08.294811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:08.299129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 20:15:08.313836 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 20:15:08.332849 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 20:15:08.333108 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 20:15:08.333338 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 20:15:08.333598 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:15:08.333843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:15:08.333873 kernel: GPT:17805311 != 25165823 Feb 13 20:15:08.333898 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:15:08.333930 kernel: GPT:17805311 != 25165823 Feb 13 20:15:08.333953 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:15:08.333975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.334001 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 20:15:08.309850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:08.345720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:08.356700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:08.395830 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:08.408603 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Feb 13 20:15:08.408644 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Feb 13 20:15:08.430546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 20:15:08.443362 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 20:15:08.450239 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
Feb 13 20:15:08.450519 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 20:15:08.464188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:15:08.471657 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:15:08.484740 disk-uuid[549]: Primary Header is updated. Feb 13 20:15:08.484740 disk-uuid[549]: Secondary Entries is updated. Feb 13 20:15:08.484740 disk-uuid[549]: Secondary Header is updated. Feb 13 20:15:08.496479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.518478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.525490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:09.526480 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:09.526556 disk-uuid[550]: The operation has completed successfully. Feb 13 20:15:09.599026 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:15:09.599183 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:15:09.623651 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:15:09.656700 sh[567]: Success Feb 13 20:15:09.678480 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:15:09.758355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:15:09.768631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:15:09.790020 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:15:09.832300 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:15:09.832382 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:09.832421 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:15:09.841740 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:15:09.854280 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:15:09.876496 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:15:09.880491 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:15:09.881426 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:15:09.887658 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:15:09.936695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:15:09.982264 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:09.982303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:09.982326 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:15:09.982349 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:15:09.982383 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:15:09.993929 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:15:10.011629 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:10.020643 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:15:10.045705 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 20:15:10.149866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:15:10.194631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:15:10.246033 ignition[659]: Ignition 2.19.0 Feb 13 20:15:10.246065 ignition[659]: Stage: fetch-offline Feb 13 20:15:10.248690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:15:10.246123 ignition[659]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.248924 systemd-networkd[750]: lo: Link UP Feb 13 20:15:10.246139 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.248929 systemd-networkd[750]: lo: Gained carrier Feb 13 20:15:10.246308 ignition[659]: parsed url from cmdline: "" Feb 13 20:15:10.251262 systemd-networkd[750]: Enumeration completed Feb 13 20:15:10.246315 ignition[659]: no config URL provided Feb 13 20:15:10.251957 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:15:10.246325 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:15:10.252490 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:15:10.246339 ignition[659]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:15:10.252499 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:15:10.246350 ignition[659]: failed to fetch config: resource requires networking Feb 13 20:15:10.254261 systemd-networkd[750]: eth0: Link UP Feb 13 20:15:10.246794 ignition[659]: Ignition finished successfully Feb 13 20:15:10.254268 systemd-networkd[750]: eth0: Gained carrier Feb 13 20:15:10.351815 ignition[759]: Ignition 2.19.0 Feb 13 20:15:10.254280 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:15:10.351823 ignition[759]: Stage: fetch Feb 13 20:15:10.264528 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.47/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:15:10.352039 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.287242 systemd[1]: Reached target network.target - Network. Feb 13 20:15:10.352051 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.307693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:15:10.352176 ignition[759]: parsed url from cmdline: "" Feb 13 20:15:10.362474 unknown[759]: fetched base config from "system" Feb 13 20:15:10.352183 ignition[759]: no config URL provided Feb 13 20:15:10.362488 unknown[759]: fetched base config from "system" Feb 13 20:15:10.352192 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:15:10.362498 unknown[759]: fetched user config from "gcp" Feb 13 20:15:10.352203 ignition[759]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:15:10.365526 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:15:10.352226 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 20:15:10.375693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:15:10.355710 ignition[759]: GET result: OK Feb 13 20:15:10.432902 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:15:10.355828 ignition[759]: parsing config with SHA512: bbdd0cd53be3af2d8e5808e56a4b89e2121849a745e545b47fbe08987323b26adc478d2930474e848692894da1d80b8a9bd4eac434244a658c1e7167233a0763 Feb 13 20:15:10.456954 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:15:10.363077 ignition[759]: fetch: fetch complete Feb 13 20:15:10.477694 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 20:15:10.363083 ignition[759]: fetch: fetch passed Feb 13 20:15:10.478893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:15:10.363135 ignition[759]: Ignition finished successfully Feb 13 20:15:10.503811 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:15:10.419669 ignition[766]: Ignition 2.19.0 Feb 13 20:15:10.517767 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:15:10.419680 ignition[766]: Stage: kargs Feb 13 20:15:10.549720 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:15:10.419883 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.557739 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:15:10.419895 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.586666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:15:10.420896 ignition[766]: kargs: kargs passed Feb 13 20:15:10.420963 ignition[766]: Ignition finished successfully Feb 13 20:15:10.475223 ignition[772]: Ignition 2.19.0 Feb 13 20:15:10.475232 ignition[772]: Stage: disks Feb 13 20:15:10.475466 ignition[772]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.475485 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.476654 ignition[772]: disks: disks passed Feb 13 20:15:10.476721 ignition[772]: Ignition finished successfully Feb 13 20:15:10.649977 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:15:10.834543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:15:10.863603 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:15:10.980483 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. 
Feb 13 20:15:10.981243 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:15:10.982133 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:15:11.013585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:15:11.018189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:15:11.047139 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:15:11.103769 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Feb 13 20:15:11.103820 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:11.103838 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:11.103854 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:15:11.047219 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:15:11.142747 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:15:11.142780 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:15:11.047262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:15:11.067309 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:15:11.126182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:15:11.156701 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:15:11.268151 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:15:11.279247 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:15:11.289588 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:15:11.299585 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:15:11.425694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:15:11.431641 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:15:11.467490 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:11.475683 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:15:11.485710 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:15:11.521893 ignition[900]: INFO : Ignition 2.19.0 Feb 13 20:15:11.529623 ignition[900]: INFO : Stage: mount Feb 13 20:15:11.529623 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:11.529623 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:11.529623 ignition[900]: INFO : mount: mount passed Feb 13 20:15:11.529623 ignition[900]: INFO : Ignition finished successfully Feb 13 20:15:11.526470 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:15:11.537083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:15:11.559621 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:15:11.810661 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 20:15:11.987728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 20:15:12.031489 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Feb 13 20:15:12.049247 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:12.049347 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:12.049373 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:15:12.070770 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:15:12.070867 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:15:12.073861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:15:12.111763 ignition[930]: INFO : Ignition 2.19.0 Feb 13 20:15:12.111763 ignition[930]: INFO : Stage: files Feb 13 20:15:12.126597 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:12.126597 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:12.126597 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:15:12.126597 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:15:12.126597 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:15:12.126597 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:15:12.123480 
unknown[930]: wrote ssh authorized keys file for user: core Feb 13 20:15:12.261599 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:15:12.546381 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:15:12.546381 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 20:15:12.868534 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:15:13.243670 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:15:13.243670 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:15:13.282712 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:15:13.282712 
ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:15:13.282712 ignition[930]: INFO : files: files passed Feb 13 20:15:13.282712 ignition[930]: INFO : Ignition finished successfully Feb 13 20:15:13.248234 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:15:13.278702 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:15:13.299602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:15:13.350016 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:15:13.499619 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:13.499619 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:13.350139 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:15:13.557638 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:15:13.373926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:15:13.396862 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:15:13.425682 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:15:13.506179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:15:13.506312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:15:13.513888 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:15:13.547728 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Feb 13 20:15:13.567805 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:15:13.574756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:15:13.679602 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:15:13.710698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:15:13.751624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:13.752039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:15:13.771939 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:15:13.790909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:15:13.791099 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:15:13.823909 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:15:13.834913 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:15:13.851908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:15:13.866904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:15:13.884911 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:15:13.903918 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:15:13.921906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:15:13.938932 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:15:13.959950 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:15:13.976943 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:15:14.007688 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 13 20:15:14.008074 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:14.034794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:14.035158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:14.052923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:15:14.053081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:14.089840 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:15:14.090039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:14.119879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:15:14.120089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:15:14.129940 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:15:14.130109 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:15:14.155822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:15:14.194621 ignition[983]: INFO : Ignition 2.19.0
Feb 13 20:15:14.194621 ignition[983]: INFO : Stage: umount
Feb 13 20:15:14.194621 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:14.194621 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:15:14.194621 ignition[983]: INFO : umount: umount passed
Feb 13 20:15:14.194621 ignition[983]: INFO : Ignition finished successfully
Feb 13 20:15:14.208773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:15:14.210792 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:15:14.210989 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:14.259832 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:15:14.260013 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:14.290190 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:15:14.291171 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:15:14.291295 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:15:14.307231 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:15:14.307351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:15:14.318842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:15:14.318962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:15:14.335758 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:15:14.335814 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:15:14.362763 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:15:14.362830 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:15:14.370789 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:15:14.370847 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:15:14.387791 systemd[1]: Stopped target network.target - Network.
Feb 13 20:15:14.404740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:15:14.404817 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:15:14.419790 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:15:14.436745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:15:14.440524 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:14.451738 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:15:14.480682 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:15:14.488770 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:15:14.488827 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:14.503787 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:15:14.503844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:14.520772 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:15:14.520842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:15:14.537793 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:15:14.537859 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:14.554799 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:15:14.554861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:15:14.572017 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:15:14.576510 systemd-networkd[750]: eth0: DHCPv6 lease lost
Feb 13 20:15:14.599797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:14.620025 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:15:14.620183 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:14.641006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:15:14.641380 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:15:14.660156 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:15:14.660210 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:14.674577 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:15:14.707547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:15:14.707662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:15:14.726690 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:15:14.726768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:14.744667 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:15:14.744754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:14.762643 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:15:15.187530 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:15:14.762726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:14.781795 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:14.803069 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:15:14.803249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:14.830155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:15:14.830258 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:14.849687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:15:14.849747 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:14.869638 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:15:14.869736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:14.899575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:15:14.899683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:14.926571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:14.926684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:14.963688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:15:14.977581 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:15:14.977705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:14.988720 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:15:14.988819 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:14.999699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:15:14.999791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:15.018700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:15.018790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:15.040151 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:15:15.040286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:15:15.058057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:15:15.058194 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:15:15.079911 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:15:15.103696 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:15:15.147973 systemd[1]: Switching root.
Feb 13 20:15:15.487582 systemd-journald[183]: Journal stopped
Feb 13 20:15:07.083090 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:15:07.083136 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:07.083156 kernel: BIOS-provided physical RAM map:
Feb 13 20:15:07.083171 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 20:15:07.083185 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 20:15:07.083199 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 20:15:07.083216 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 20:15:07.083236 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 20:15:07.083250 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 13 20:15:07.083264 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Feb 13 20:15:07.083279 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Feb 13 20:15:07.083294 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Feb 13 20:15:07.083308 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 20:15:07.083322 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 20:15:07.083345 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 20:15:07.083361 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 20:15:07.083377 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 20:15:07.083394 kernel: NX (Execute Disable) protection: active
Feb 13 20:15:07.083410 kernel: APIC: Static calls initialized
Feb 13 20:15:07.083425 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:15:07.083442 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Feb 13 20:15:07.083479 kernel: SMBIOS 2.4 present.
Feb 13 20:15:07.083494 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 20:15:07.083509 kernel: Hypervisor detected: KVM
Feb 13 20:15:07.083531 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:15:07.083547 kernel: kvm-clock: using sched offset of 12310751680 cycles
Feb 13 20:15:07.083565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:15:07.083583 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 20:15:07.083600 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:15:07.083618 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:15:07.083635 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 20:15:07.083652 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 20:15:07.083668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:15:07.083690 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 20:15:07.083707 kernel: Using GB pages for direct mapping
Feb 13 20:15:07.083724 kernel: Secure boot disabled
Feb 13 20:15:07.083741 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:15:07.083758 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 20:15:07.083775 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 20:15:07.083793 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 20:15:07.083837 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 20:15:07.083860 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 20:15:07.083878 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 20:15:07.083904 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 20:15:07.083922 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 20:15:07.083941 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 20:15:07.083959 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 20:15:07.083982 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 20:15:07.084000 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 20:15:07.084019 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 20:15:07.084037 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 20:15:07.084055 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 20:15:07.084074 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 20:15:07.084092 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 20:15:07.084110 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 20:15:07.084128 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 20:15:07.084150 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 20:15:07.084168 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:15:07.084187 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:15:07.084205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:15:07.084223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 20:15:07.084241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 20:15:07.084260 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 20:15:07.084279 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 20:15:07.084298 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 20:15:07.084320 kernel: Zone ranges:
Feb 13 20:15:07.084339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:15:07.084357 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:15:07.084376 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:15:07.084394 kernel: Movable zone start for each node
Feb 13 20:15:07.084413 kernel: Early memory node ranges
Feb 13 20:15:07.084431 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 20:15:07.085149 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 20:15:07.085173 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 13 20:15:07.085199 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 20:15:07.085218 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:15:07.085237 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 20:15:07.085255 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:15:07.085274 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 20:15:07.085292 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 20:15:07.085311 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 20:15:07.085330 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 20:15:07.085349 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 20:15:07.085367 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:15:07.085389 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:15:07.085408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:15:07.085425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:15:07.085459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:15:07.085479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:15:07.085498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:15:07.085516 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:15:07.085534 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 20:15:07.085557 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:15:07.085577 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:15:07.085595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:15:07.085614 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:15:07.085632 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:15:07.085650 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:15:07.085668 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:15:07.085687 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:15:07.085707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:07.085731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:15:07.085749 kernel: random: crng init done
Feb 13 20:15:07.085765 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:15:07.085784 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:15:07.085803 kernel: Fallback order for Node 0: 0
Feb 13 20:15:07.085822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Feb 13 20:15:07.085840 kernel: Policy zone: Normal
Feb 13 20:15:07.085858 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:15:07.085876 kernel: software IO TLB: area num 2.
Feb 13 20:15:07.085907 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 346928K reserved, 0K cma-reserved)
Feb 13 20:15:07.085926 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:15:07.085944 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:15:07.085962 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:15:07.085981 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:15:07.086000 kernel: Dynamic Preempt: voluntary
Feb 13 20:15:07.086018 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:15:07.086038 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:15:07.086076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:15:07.086095 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:15:07.086115 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:15:07.086138 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:15:07.086158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:15:07.086178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:15:07.086196 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:15:07.086216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:15:07.086236 kernel: Console: colour dummy device 80x25
Feb 13 20:15:07.086261 kernel: printk: console [ttyS0] enabled
Feb 13 20:15:07.086281 kernel: ACPI: Core revision 20230628
Feb 13 20:15:07.086300 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:15:07.086320 kernel: x2apic enabled
Feb 13 20:15:07.086339 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:15:07.086359 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 20:15:07.086379 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:15:07.086399 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 20:15:07.086422 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 20:15:07.086442 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 20:15:07.086481 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:15:07.086500 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:15:07.086519 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:15:07.086539 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 20:15:07.086564 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:15:07.086584 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:15:07.086604 kernel: RETBleed: Mitigation: IBRS
Feb 13 20:15:07.086629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:15:07.086649 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 20:15:07.086669 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:15:07.086689 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:15:07.086709 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:15:07.086729 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:15:07.086748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:15:07.086769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:15:07.086789 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:15:07.086813 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:15:07.086833 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:15:07.086852 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:15:07.086871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:15:07.086898 kernel: landlock: Up and running.
Feb 13 20:15:07.086918 kernel: SELinux: Initializing.
Feb 13 20:15:07.086937 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.086957 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.086977 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 20:15:07.086998 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087017 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087037 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:07.087057 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 20:15:07.087077 kernel: signal: max sigframe size: 1776
Feb 13 20:15:07.087103 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:15:07.087124 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:15:07.087143 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:15:07.087161 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:15:07.087186 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:15:07.087206 kernel: .... node #0, CPUs: #1
Feb 13 20:15:07.087224 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 20:15:07.087242 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:15:07.087259 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:15:07.087278 kernel: smpboot: Max logical packages: 1
Feb 13 20:15:07.087294 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 20:15:07.087311 kernel: devtmpfs: initialized
Feb 13 20:15:07.087340 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:15:07.087363 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 20:15:07.087386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:15:07.087404 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:15:07.087423 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:15:07.087442 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:15:07.087554 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:15:07.087573 kernel: audit: type=2000 audit(1739477706.271:1): state=initialized audit_enabled=0 res=1
Feb 13 20:15:07.087592 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:15:07.087617 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:15:07.087635 kernel: cpuidle: using governor menu
Feb 13 20:15:07.087655 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:15:07.087674 kernel: dca service started, version 1.12.1
Feb 13 20:15:07.087693 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:15:07.087712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:15:07.087731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:15:07.087750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:15:07.087770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:15:07.087793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:15:07.087812 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:15:07.087831 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:15:07.087850 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:15:07.087869 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:15:07.087899 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 20:15:07.087918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:15:07.087937 kernel: ACPI: Interpreter enabled
Feb 13 20:15:07.087956 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 20:15:07.087979 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:15:07.087998 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:15:07.088018 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:15:07.088037 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 20:15:07.088056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:15:07.088311 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:15:07.088532 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:15:07.088717 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:15:07.088741 kernel: PCI host bridge to bus 0000:00
Feb 13 20:15:07.088925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:15:07.089090 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:15:07.089252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:07.089413 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 20:15:07.089589 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:15:07.089787 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:15:07.089993 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 20:15:07.090189 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:15:07.090370 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 20:15:07.090575 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 20:15:07.090758 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 20:15:07.090953 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 20:15:07.091153 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:07.091337 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 20:15:07.091532 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 20:15:07.091723 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:15:07.091907 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 20:15:07.092088 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 20:15:07.092118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:15:07.092137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:15:07.092156 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:15:07.092175 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:15:07.092194 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:15:07.092212 kernel: iommu: Default domain type: Translated
Feb 13 20:15:07.092230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:15:07.092250 kernel: efivars: Registered efivars operations
Feb 13 20:15:07.092268 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:15:07.092291 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:15:07.092309 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 20:15:07.092327 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 20:15:07.092345 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 20:15:07.092362 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 20:15:07.092379 kernel: vgaarb: loaded
Feb 13 20:15:07.092398 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:15:07.092416 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:15:07.092436 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:15:07.092472 kernel: pnp: PnP ACPI init
Feb 13 20:15:07.092495 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 20:15:07.092514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:15:07.092533 kernel: NET: Registered PF_INET protocol family
Feb 13 20:15:07.092552 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:15:07.092570 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:15:07.092589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:15:07.092608 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:15:07.092626 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:15:07.092645 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:15:07.092667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.092685 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:15:07.092704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:15:07.092722 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:15:07.092899 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:15:07.093061 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:15:07.093220 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:07.093383 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 20:15:07.093606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:15:07.093633 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:15:07.093654 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:15:07.093674 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 20:15:07.093693 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:15:07.093712 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:15:07.093731 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:15:07.093751 kernel: Initialise system trusted keyrings
Feb 13 20:15:07.093777 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:15:07.093797 kernel: Key type asymmetric registered
Feb 13 20:15:07.093816 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:15:07.093835 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:15:07.093855 kernel: io scheduler mq-deadline registered
Feb 13 20:15:07.093874 kernel: io scheduler kyber registered
Feb 13 20:15:07.093903 kernel: io scheduler bfq registered
Feb 13 20:15:07.093923 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:15:07.093943 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:15:07.094136 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.094161 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 20:15:07.094342 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.094366 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:15:07.095535 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 20:15:07.095569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:15:07.095589 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095608 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095627 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095652 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 20:15:07.095848 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 20:15:07.095873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:15:07.095902 kernel: i8042: Warning: Keylock active
Feb 13 20:15:07.095920 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:15:07.095938 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:15:07.098659 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 20:15:07.098843 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 20:15:07.099016 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:15:06 UTC (1739477706)
Feb 13 20:15:07.099178 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 20:15:07.099202 kernel: intel_pstate: CPU model not supported
Feb 13 20:15:07.099222 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:15:07.099242 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:15:07.099261 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:15:07.099281 kernel: Segment Routing with IPv6
Feb 13 20:15:07.099304 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:15:07.099323 kernel: NET: Registered PF_PACKET protocol family
Feb 13
20:15:07.099344 kernel: Key type dns_resolver registered Feb 13 20:15:07.099361 kernel: IPI shorthand broadcast: enabled Feb 13 20:15:07.099377 kernel: sched_clock: Marking stable (848004233, 128465942)->(999018033, -22547858) Feb 13 20:15:07.099392 kernel: registered taskstats version 1 Feb 13 20:15:07.099408 kernel: Loading compiled-in X.509 certificates Feb 13 20:15:07.099425 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:15:07.099442 kernel: Key type .fscrypt registered Feb 13 20:15:07.101499 kernel: Key type fscrypt-provisioning registered Feb 13 20:15:07.101529 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:15:07.101549 kernel: ima: No architecture policies found Feb 13 20:15:07.101567 kernel: clk: Disabling unused clocks Feb 13 20:15:07.101585 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:15:07.101603 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:15:07.101621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:15:07.101637 kernel: Run /init as init process Feb 13 20:15:07.101665 kernel: with arguments: Feb 13 20:15:07.101687 kernel: /init Feb 13 20:15:07.101703 kernel: with environment: Feb 13 20:15:07.101718 kernel: HOME=/ Feb 13 20:15:07.101736 kernel: TERM=linux Feb 13 20:15:07.101754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:15:07.101770 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:15:07.101791 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:15:07.101812 systemd[1]: Detected virtualization google. 
Feb 13 20:15:07.101838 systemd[1]: Detected architecture x86-64. Feb 13 20:15:07.101856 systemd[1]: Running in initrd. Feb 13 20:15:07.101874 systemd[1]: No hostname configured, using default hostname. Feb 13 20:15:07.101923 systemd[1]: Hostname set to <localhost>. Feb 13 20:15:07.101945 systemd[1]: Initializing machine ID from random generator. Feb 13 20:15:07.101965 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:15:07.101986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:15:07.102006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:15:07.102033 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:15:07.102054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:15:07.102073 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:15:07.102094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:15:07.102117 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:15:07.102138 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:15:07.102164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:15:07.102185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:15:07.102226 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:15:07.102251 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:15:07.102273 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:15:07.102295 systemd[1]: Reached target timers.target - Timer Units. 
Feb 13 20:15:07.102315 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:15:07.102338 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:15:07.102358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:15:07.102379 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:15:07.102400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:15:07.102421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:15:07.102511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:15:07.102536 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:15:07.102557 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:15:07.102582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:15:07.102603 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:15:07.102622 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:15:07.102642 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:15:07.102663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:15:07.102683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:07.102740 systemd-journald[183]: Collecting audit messages is disabled. Feb 13 20:15:07.102787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:15:07.102808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:15:07.102828 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:15:07.102854 systemd-journald[183]: Journal started Feb 13 20:15:07.102904 systemd-journald[183]: Runtime Journal (/run/log/journal/b47b0c173abe4e3d810635ef9ced3ec9) is 8.0M, max 148.7M, 140.7M free. 
Feb 13 20:15:07.101700 systemd-modules-load[184]: Inserted module 'overlay' Feb 13 20:15:07.106643 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:15:07.119725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:15:07.121613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:15:07.142124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:07.152495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:15:07.152755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:07.154543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:15:07.163104 kernel: Bridge firewalling registered Feb 13 20:15:07.162189 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 13 20:15:07.167231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:15:07.178948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:15:07.186675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:15:07.194963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:15:07.214857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:07.218978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:15:07.227023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:15:07.243690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Feb 13 20:15:07.250683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:15:07.263793 dracut-cmdline[215]: dracut-dracut-053 Feb 13 20:15:07.269240 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:15:07.309736 systemd-resolved[219]: Positive Trust Anchors: Feb 13 20:15:07.309761 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:15:07.309825 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:15:07.315429 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 20:15:07.317417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:15:07.330705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:07.381496 kernel: SCSI subsystem initialized Feb 13 20:15:07.391492 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 20:15:07.403499 kernel: iscsi: registered transport (tcp) Feb 13 20:15:07.426883 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:15:07.426969 kernel: QLogic iSCSI HBA Driver Feb 13 20:15:07.479480 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:15:07.483706 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:15:07.525733 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:15:07.525822 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:15:07.525852 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:15:07.571506 kernel: raid6: avx2x4 gen() 18253 MB/s Feb 13 20:15:07.588492 kernel: raid6: avx2x2 gen() 18353 MB/s Feb 13 20:15:07.605919 kernel: raid6: avx2x1 gen() 14109 MB/s Feb 13 20:15:07.605999 kernel: raid6: using algorithm avx2x2 gen() 18353 MB/s Feb 13 20:15:07.623861 kernel: raid6: .... xor() 17834 MB/s, rmw enabled Feb 13 20:15:07.623914 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:15:07.647485 kernel: xor: automatically using best checksumming function avx Feb 13 20:15:07.826490 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:15:07.840351 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:15:07.855671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:15:07.872789 systemd-udevd[401]: Using default interface naming scheme 'v255'. Feb 13 20:15:07.879727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:15:07.891668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:15:07.925013 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Feb 13 20:15:07.963347 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 20:15:07.979737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:15:08.060441 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:15:08.072642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:15:08.109372 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:15:08.112904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:15:08.121562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:15:08.125609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:15:08.138131 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:15:08.176489 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:15:08.184480 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 20:15:08.185485 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:15:08.191522 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:15:08.241175 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:15:08.241251 kernel: AES CTR mode by8 optimization enabled Feb 13 20:15:08.277827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:15:08.278636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:08.288416 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:08.294548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:08.294811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:08.299129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 20:15:08.313836 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 20:15:08.332849 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 20:15:08.333108 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 20:15:08.333338 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 20:15:08.333598 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:15:08.333843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:15:08.333873 kernel: GPT:17805311 != 25165823 Feb 13 20:15:08.333898 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:15:08.333930 kernel: GPT:17805311 != 25165823 Feb 13 20:15:08.333953 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:15:08.333975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.334001 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 20:15:08.309850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:08.345720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:08.356700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:15:08.395830 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:15:08.408603 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Feb 13 20:15:08.408644 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Feb 13 20:15:08.430546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 20:15:08.443362 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 20:15:08.450239 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
Feb 13 20:15:08.450519 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 20:15:08.464188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:15:08.471657 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:15:08.484740 disk-uuid[549]: Primary Header is updated. Feb 13 20:15:08.484740 disk-uuid[549]: Secondary Entries is updated. Feb 13 20:15:08.484740 disk-uuid[549]: Secondary Header is updated. Feb 13 20:15:08.496479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.518478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:08.525490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:09.526480 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:15:09.526556 disk-uuid[550]: The operation has completed successfully. Feb 13 20:15:09.599026 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:15:09.599183 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:15:09.623651 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:15:09.656700 sh[567]: Success Feb 13 20:15:09.678480 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:15:09.758355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:15:09.768631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:15:09.790020 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:15:09.832300 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:15:09.832382 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:09.832421 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:15:09.841740 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:15:09.854280 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:15:09.876496 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:15:09.880491 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:15:09.881426 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:15:09.887658 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:15:09.936695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:15:09.982264 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:09.982303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:09.982326 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:15:09.982349 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:15:09.982383 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:15:09.993929 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:15:10.011629 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:10.020643 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:15:10.045705 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 20:15:10.149866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:15:10.194631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:15:10.246033 ignition[659]: Ignition 2.19.0 Feb 13 20:15:10.246065 ignition[659]: Stage: fetch-offline Feb 13 20:15:10.248690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:15:10.246123 ignition[659]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.248924 systemd-networkd[750]: lo: Link UP Feb 13 20:15:10.246139 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.248929 systemd-networkd[750]: lo: Gained carrier Feb 13 20:15:10.246308 ignition[659]: parsed url from cmdline: "" Feb 13 20:15:10.251262 systemd-networkd[750]: Enumeration completed Feb 13 20:15:10.246315 ignition[659]: no config URL provided Feb 13 20:15:10.251957 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:15:10.246325 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:15:10.252490 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:15:10.246339 ignition[659]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:15:10.252499 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:15:10.246350 ignition[659]: failed to fetch config: resource requires networking Feb 13 20:15:10.254261 systemd-networkd[750]: eth0: Link UP Feb 13 20:15:10.246794 ignition[659]: Ignition finished successfully Feb 13 20:15:10.254268 systemd-networkd[750]: eth0: Gained carrier Feb 13 20:15:10.351815 ignition[759]: Ignition 2.19.0 Feb 13 20:15:10.254280 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:15:10.351823 ignition[759]: Stage: fetch Feb 13 20:15:10.264528 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.47/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:15:10.352039 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.287242 systemd[1]: Reached target network.target - Network. Feb 13 20:15:10.352051 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.307693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:15:10.352176 ignition[759]: parsed url from cmdline: "" Feb 13 20:15:10.362474 unknown[759]: fetched base config from "system" Feb 13 20:15:10.352183 ignition[759]: no config URL provided Feb 13 20:15:10.362488 unknown[759]: fetched base config from "system" Feb 13 20:15:10.352192 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:15:10.362498 unknown[759]: fetched user config from "gcp" Feb 13 20:15:10.352203 ignition[759]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:15:10.365526 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:15:10.352226 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 20:15:10.375693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:15:10.355710 ignition[759]: GET result: OK Feb 13 20:15:10.432902 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:15:10.355828 ignition[759]: parsing config with SHA512: bbdd0cd53be3af2d8e5808e56a4b89e2121849a745e545b47fbe08987323b26adc478d2930474e848692894da1d80b8a9bd4eac434244a658c1e7167233a0763 Feb 13 20:15:10.456954 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:15:10.363077 ignition[759]: fetch: fetch complete Feb 13 20:15:10.477694 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 20:15:10.363083 ignition[759]: fetch: fetch passed Feb 13 20:15:10.478893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:15:10.363135 ignition[759]: Ignition finished successfully Feb 13 20:15:10.503811 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:15:10.419669 ignition[766]: Ignition 2.19.0 Feb 13 20:15:10.517767 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:15:10.419680 ignition[766]: Stage: kargs Feb 13 20:15:10.549720 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:15:10.419883 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.557739 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:15:10.419895 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.586666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:15:10.420896 ignition[766]: kargs: kargs passed Feb 13 20:15:10.420963 ignition[766]: Ignition finished successfully Feb 13 20:15:10.475223 ignition[772]: Ignition 2.19.0 Feb 13 20:15:10.475232 ignition[772]: Stage: disks Feb 13 20:15:10.475466 ignition[772]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:10.475485 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:10.476654 ignition[772]: disks: disks passed Feb 13 20:15:10.476721 ignition[772]: Ignition finished successfully Feb 13 20:15:10.649977 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:15:10.834543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:15:10.863603 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:15:10.980483 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. 
Feb 13 20:15:10.981243 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:15:10.982133 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:15:11.013585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:15:11.018189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:15:11.047139 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:15:11.103769 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Feb 13 20:15:11.103820 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:11.103838 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:15:11.103854 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:15:11.047219 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:15:11.142747 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:15:11.142780 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:15:11.047262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:15:11.067309 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:15:11.126182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:15:11.156701 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:15:11.268151 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:15:11.279247 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:15:11.289588 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:15:11.299585 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:15:11.425694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:15:11.431641 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:15:11.467490 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:15:11.475683 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:15:11.485710 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:15:11.521893 ignition[900]: INFO : Ignition 2.19.0 Feb 13 20:15:11.529623 ignition[900]: INFO : Stage: mount Feb 13 20:15:11.529623 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:15:11.529623 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:15:11.529623 ignition[900]: INFO : mount: mount passed Feb 13 20:15:11.529623 ignition[900]: INFO : Ignition finished successfully Feb 13 20:15:11.526470 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:15:11.537083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:15:11.559621 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:15:11.810661 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 20:15:11.987728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 20:15:12.031489 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913)
Feb 13 20:15:12.049247 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:12.049347 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:12.049373 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:15:12.070770 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:15:12.070867 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:15:12.073861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:15:12.111763 ignition[930]: INFO : Ignition 2.19.0
Feb 13 20:15:12.111763 ignition[930]: INFO : Stage: files
Feb 13 20:15:12.126597 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:12.126597 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:15:12.126597 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:15:12.126597 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:15:12.126597 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:15:12.126597 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:15:12.126597 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:15:12.123480 unknown[930]: wrote ssh authorized keys file for user: core
Feb 13 20:15:12.261599 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:15:12.546381 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:15:12.546381 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:15:12.578585 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:15:12.868534 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:15:13.243670 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:15:13.243670 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:15:13.282712 ignition[930]: INFO : files: files passed
Feb 13 20:15:13.282712 ignition[930]: INFO : Ignition finished successfully
Feb 13 20:15:13.248234 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:15:13.278702 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:15:13.299602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:15:13.350016 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:15:13.499619 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.499619 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.350139 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:15:13.557638 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.373926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:15:13.396862 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:15:13.425682 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:15:13.506179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:15:13.506312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:15:13.513888 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:15:13.547728 systemd[1]: Reached target initrd.target - Initrd Default Target.
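The ignition-files stage above replays a user-supplied Ignition config fetched from the GCE metadata service. The actual config is not shown in the log, but a spec-3.x config producing writes like op(3) through op(9) would look roughly like this (a sketch only: the SSH key, file contents, and unit body are illustrative placeholders, not recovered from this system):

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA-example-key core@example"] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
      },
      {
        "path": "/etc/flatcar/update.conf",
        "contents": { "source": "data:,REBOOT_STRATEGY=off%0A" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
        "hard": false
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "prepare-helm.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm (illustrative)\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar xf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1\n[Install]\nWantedBy=multi-user.target"
      }
    ]
  }
}
```

Paths in the config are relative to the real root; the log prefixes them with `/sysroot` because Ignition runs from the initrd with the target root mounted there.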
Feb 13 20:15:13.567805 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:15:13.574756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:15:13.679602 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:15:13.710698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:15:13.751624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:15:13.752039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:13.771939 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:15:13.790909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:15:13.791099 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:15:13.823909 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:15:13.834913 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:15:13.851908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:15:13.866904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:15:13.884911 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:15:13.903918 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:15:13.921906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:15:13.938932 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:15:13.959950 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:15:13.976943 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:15:14.007688 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:15:14.008074 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:14.034794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:14.035158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:14.052923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:15:14.053081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:14.089840 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:15:14.090039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:14.119879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:15:14.120089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:15:14.129940 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:15:14.130109 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:15:14.155822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:15:14.194621 ignition[983]: INFO : Ignition 2.19.0
Feb 13 20:15:14.194621 ignition[983]: INFO : Stage: umount
Feb 13 20:15:14.194621 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:14.194621 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:15:14.194621 ignition[983]: INFO : umount: umount passed
Feb 13 20:15:14.194621 ignition[983]: INFO : Ignition finished successfully
Feb 13 20:15:14.208773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:15:14.210792 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:15:14.210989 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:14.259832 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:15:14.260013 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:14.290190 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:15:14.291171 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:15:14.291295 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:15:14.307231 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:15:14.307351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:15:14.318842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:15:14.318962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:15:14.335758 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:15:14.335814 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:15:14.362763 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:15:14.362830 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:15:14.370789 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:15:14.370847 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:15:14.387791 systemd[1]: Stopped target network.target - Network.
Feb 13 20:15:14.404740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:15:14.404817 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:15:14.419790 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:15:14.436745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:15:14.440524 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:14.451738 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:15:14.480682 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:15:14.488770 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:15:14.488827 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:14.503787 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:15:14.503844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:14.520772 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:15:14.520842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:15:14.537793 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:15:14.537859 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:14.554799 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:15:14.554861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:15:14.572017 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:15:14.576510 systemd-networkd[750]: eth0: DHCPv6 lease lost
Feb 13 20:15:14.599797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:14.620025 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:15:14.620183 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:14.641006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:15:14.641380 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:15:14.660156 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:15:14.660210 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:14.674577 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:15:14.707547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:15:14.707662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:15:14.726690 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:15:14.726768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:14.744667 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:15:14.744754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:14.762643 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:15:15.187530 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:15:14.762726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:14.781795 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:14.803069 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:15:14.803249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:14.830155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:15:14.830258 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:14.849687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:15:14.849747 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:14.869638 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:15:14.869736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:14.899575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:15:14.899683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:14.926571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:14.926684 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:14.963688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:15:14.977581 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:15:14.977705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:14.988720 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:15:14.988819 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:14.999699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:15:14.999791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:15.018700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:15.018790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:15.040151 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:15:15.040286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:15:15.058057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:15:15.058194 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:15:15.079911 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:15:15.103696 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:15:15.147973 systemd[1]: Switching root.
Feb 13 20:15:15.487582 systemd-journald[183]: Journal stopped
Feb 13 20:15:17.839443 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:15:17.839527 kernel: SELinux: policy capability open_perms=1
Feb 13 20:15:17.839549 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:15:17.839567 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:15:17.839584 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:15:17.839602 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:15:17.839622 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:15:17.839644 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:15:17.839663 kernel: audit: type=1403 audit(1739477715.768:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:15:17.839685 systemd[1]: Successfully loaded SELinux policy in 80.314ms.
Feb 13 20:15:17.839707 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.792ms.
Feb 13 20:15:17.839729 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:17.839749 systemd[1]: Detected virtualization google.
Feb 13 20:15:17.839770 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:17.839795 systemd[1]: Detected first boot.
Feb 13 20:15:17.839818 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:15:17.839840 zram_generator::config[1024]: No configuration found.
Feb 13 20:15:17.839862 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:15:17.839884 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:15:17.839909 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:15:17.839930 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:15:17.839953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:15:17.839984 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:15:17.840005 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:15:17.840028 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:15:17.840050 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:15:17.840073 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:15:17.840095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:15:17.840118 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:15:17.840140 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:17.840162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:17.840182 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:15:17.840204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:15:17.840225 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:15:17.840252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:17.840275 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:15:17.840296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:17.840317 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:15:17.840339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:15:17.840360 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:15:17.840388 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:15:17.840411 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:17.840433 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:15:17.840490 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:17.840514 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:17.840535 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:15:17.840557 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:15:17.840580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:17.840602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:17.840623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:17.840655 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:15:17.840680 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:15:17.840703 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:15:17.840726 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:15:17.840749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:17.840777 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:15:17.840802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:15:17.840822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:15:17.840846 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:15:17.840871 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:15:17.840893 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:15:17.840916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:15:17.840939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:17.840978 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:15:17.841003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:15:17.841028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:15:17.841052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:15:17.841076 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:15:17.841101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:15:17.841125 kernel: fuse: init (API version 7.39)
Feb 13 20:15:17.841147 kernel: ACPI: bus type drm_connector registered
Feb 13 20:15:17.841176 kernel: loop: module loaded
Feb 13 20:15:17.841200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:15:17.841224 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:15:17.841247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:15:17.841270 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:15:17.841294 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:15:17.841318 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:17.841342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:17.841400 systemd-journald[1111]: Collecting audit messages is disabled.
Feb 13 20:15:17.841789 systemd-journald[1111]: Journal started
Feb 13 20:15:17.841841 systemd-journald[1111]: Runtime Journal (/run/log/journal/165ae83f3bbc451592bb2f780200dfb7) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:15:16.633518 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:15:16.657244 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:15:16.657860 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:15:17.863514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:15:17.896491 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:15:17.922491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:15:17.945249 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:15:17.945345 systemd[1]: Stopped verity-setup.service.
Feb 13 20:15:17.969481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:17.980505 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:15:17.991066 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:15:18.001873 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:15:18.012848 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:15:18.022861 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:15:18.032820 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:15:18.042796 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:15:18.052880 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:15:18.063910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:18.075919 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:15:18.076158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:15:18.087914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:15:18.088152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:15:18.099901 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:15:18.100124 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:15:18.109886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:15:18.110102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:15:18.121888 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:15:18.122140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:15:18.131891 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:15:18.132118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:15:18.141877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:18.151882 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:15:18.162895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:15:18.173883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:18.196962 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:15:18.215603 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:15:18.226877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:15:18.236605 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:15:18.236831 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:15:18.248872 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:15:18.271734 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:15:18.288710 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:15:18.299802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:15:18.306849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:15:18.324964 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:15:18.333804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:15:18.339279 systemd-journald[1111]: Time spent on flushing to /var/log/journal/165ae83f3bbc451592bb2f780200dfb7 is 102.548ms for 927 entries.
Feb 13 20:15:18.339279 systemd-journald[1111]: System Journal (/var/log/journal/165ae83f3bbc451592bb2f780200dfb7) is 8.0M, max 584.8M, 576.8M free.
Feb 13 20:15:18.465088 systemd-journald[1111]: Received client request to flush runtime journal.
Feb 13 20:15:18.465316 kernel: loop0: detected capacity change from 0 to 140768
Feb 13 20:15:18.352716 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:15:18.362698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:18.375747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:15:18.394684 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:15:18.412687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:15:18.426652 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:15:18.441683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:15:18.452768 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:15:18.469641 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:15:18.483183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:15:18.498132 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:15:18.510261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:15:18.536871 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:15:18.557627 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:15:18.568481 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:15:18.583033 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:15:18.587922 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. Feb 13 20:15:18.587956 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. 
Feb 13 20:15:18.613424 kernel: loop1: detected capacity change from 0 to 54824 Feb 13 20:15:18.612056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:15:18.636438 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:15:18.647973 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:15:18.655840 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:15:18.703737 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 20:15:18.764025 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:15:18.786675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:15:18.817696 kernel: loop3: detected capacity change from 0 to 142488 Feb 13 20:15:18.846515 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 20:15:18.846549 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 20:15:18.867580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:15:18.921047 kernel: loop4: detected capacity change from 0 to 140768 Feb 13 20:15:18.974614 kernel: loop5: detected capacity change from 0 to 54824 Feb 13 20:15:19.010339 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 20:15:19.056491 kernel: loop7: detected capacity change from 0 to 142488 Feb 13 20:15:19.106535 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 20:15:19.108028 (sd-merge)[1170]: Merged extensions into '/usr'. Feb 13 20:15:19.118559 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:15:19.118785 systemd[1]: Reloading... Feb 13 20:15:19.267501 zram_generator::config[1192]: No configuration found. 
Feb 13 20:15:19.475112 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:15:19.554359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:19.661902 systemd[1]: Reloading finished in 541 ms. Feb 13 20:15:19.694377 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:15:19.705285 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:15:19.727725 systemd[1]: Starting ensure-sysext.service... Feb 13 20:15:19.742104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:15:19.762557 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:15:19.762589 systemd[1]: Reloading... Feb 13 20:15:19.811715 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:15:19.813215 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:15:19.826500 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:15:19.833540 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Feb 13 20:15:19.833695 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Feb 13 20:15:19.846073 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:15:19.846384 systemd-tmpfiles[1237]: Skipping /boot Feb 13 20:15:19.871986 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 20:15:19.872182 systemd-tmpfiles[1237]: Skipping /boot Feb 13 20:15:19.877481 zram_generator::config[1262]: No configuration found. Feb 13 20:15:20.045700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:20.111356 systemd[1]: Reloading finished in 348 ms. Feb 13 20:15:20.128217 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:15:20.145151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:15:20.169777 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:15:20.189707 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:15:20.208704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:15:20.228763 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:15:20.249850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:15:20.269424 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:15:20.271054 augenrules[1325]: No rules Feb 13 20:15:20.279618 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:20.311608 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:15:20.331819 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:15:20.335295 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Feb 13 20:15:20.351918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 20:15:20.352737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:15:20.364459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:15:20.386036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:15:20.407009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:15:20.416715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:15:20.426003 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:15:20.435580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:20.439867 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:15:20.451122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:15:20.465584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:15:20.479585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:20.479862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:20.492522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:20.492758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:15:20.504634 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:20.504884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:20.517192 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:15:20.545524 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Feb 13 20:15:20.559954 systemd-resolved[1320]: Positive Trust Anchors: Feb 13 20:15:20.563562 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:15:20.563653 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:15:20.592271 systemd-resolved[1320]: Defaulting to hostname 'linux'. Feb 13 20:15:20.593430 systemd[1]: Finished ensure-sysext.service. Feb 13 20:15:20.601813 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:15:20.623016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:20.634701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:20.635022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:15:20.645743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:15:20.663549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:15:20.680698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:15:20.697320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:15:20.721706 systemd[1]: Starting setup-oem.service - Setup OEM... 
Feb 13 20:15:20.736481 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1349) Feb 13 20:15:20.740750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:15:20.754706 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:15:20.764624 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:15:20.781474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:15:20.787801 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:15:20.787856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:20.790014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:20.790781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:20.796128 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:15:20.807166 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:15:20.808622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:15:20.816472 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 20:15:20.827129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:20.827370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 20:15:20.842766 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 20:15:20.862691 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:15:20.862762 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 20:15:20.863821 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:20.864072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:20.898349 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 20:15:20.905525 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:15:20.910261 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:15:20.962344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:15:20.970480 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:15:20.995878 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 20:15:21.021806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:15:21.033681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:15:21.033962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:21.044475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:21.055256 systemd-networkd[1383]: lo: Link UP Feb 13 20:15:21.055269 systemd-networkd[1383]: lo: Gained carrier Feb 13 20:15:21.059108 systemd-networkd[1383]: Enumeration completed Feb 13 20:15:21.060197 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:15:21.060325 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:15:21.060361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:15:21.061626 systemd-networkd[1383]: eth0: Link UP Feb 13 20:15:21.061638 systemd-networkd[1383]: eth0: Gained carrier Feb 13 20:15:21.061665 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:15:21.071951 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:15:21.072541 systemd-networkd[1383]: eth0: DHCPv4 address 10.128.0.47/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:15:21.082560 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:15:21.094115 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 20:15:21.096320 systemd[1]: Reached target network.target - Network. Feb 13 20:15:21.105720 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:15:21.108043 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:15:21.128471 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:15:21.163802 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:15:21.164336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:15:21.169152 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:15:21.181791 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:15:21.203299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:15:21.215014 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:15:21.228571 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:15:21.238788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:15:21.249694 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:15:21.260869 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:15:21.270764 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:15:21.281627 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:15:21.292602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:15:21.292683 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:15:21.301581 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:15:21.312530 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:15:21.324321 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:15:21.336053 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:15:21.346604 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:15:21.356799 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:15:21.366606 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:15:21.374664 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:15:21.374714 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:15:21.386670 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 20:15:21.401722 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:15:21.422836 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:15:21.438273 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:15:21.465785 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:15:21.475603 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:15:21.479709 jq[1428]: false Feb 13 20:15:21.485155 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:15:21.503843 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:15:21.518513 extend-filesystems[1429]: Found loop4 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found loop5 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found loop6 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found loop7 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found sda Feb 13 20:15:21.518513 extend-filesystems[1429]: Found sda1 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found sda2 Feb 13 20:15:21.518513 extend-filesystems[1429]: Found sda3 Feb 13 20:15:21.615993 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 20:15:21.619122 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 20:15:21.603970 dbus-daemon[1427]: [system] SELinux support is enabled Feb 13 20:15:21.520582 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Feb 13 20:15:21.620637 extend-filesystems[1429]: Found usr Feb 13 20:15:21.620637 extend-filesystems[1429]: Found sda4 Feb 13 20:15:21.620637 extend-filesystems[1429]: Found sda6 Feb 13 20:15:21.620637 extend-filesystems[1429]: Found sda7 Feb 13 20:15:21.620637 extend-filesystems[1429]: Found sda9 Feb 13 20:15:21.620637 extend-filesystems[1429]: Checking size of /dev/sda9 Feb 13 20:15:21.620637 extend-filesystems[1429]: Resized partition /dev/sda9 Feb 13 20:15:21.733710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1349) Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.539 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.542 INFO Fetch successful Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.543 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.544 INFO Fetch successful Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.544 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.544 INFO Fetch successful Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.544 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 20:15:21.733818 coreos-metadata[1426]: Feb 13 20:15:21.550 INFO Fetch successful Feb 13 20:15:21.607951 dbus-daemon[1427]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1383 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:15:21.544169 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 20:15:21.734654 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:15:21.734654 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:15:21.734654 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 20:15:21.734654 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: ---------------------------------------------------- Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: corporation. Support and training for ntp-4 are 
Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: available at https://www.nwtime.org/support Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: ---------------------------------------------------- Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: proto: precision = 0.069 usec (-24) Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: basedate set to 2025-02-01 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: gps base set to 2025-02-02 (week 2352) Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listen normally on 3 eth0 10.128.0.47:123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listen normally on 4 lo [::1]:123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: bind(21) AF_INET6 fe80::4001:aff:fe80:2f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:2f%2#123 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:2f%2 Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: Listening on routing socket on fd #21 for interface updates Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:15:21.787612 ntpd[1434]: 13 Feb 20:15:21 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:15:21.626275 ntpd[1434]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:15:21.574737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 20:15:21.792417 extend-filesystems[1429]: Resized filesystem in /dev/sda9 Feb 13 20:15:21.626308 ntpd[1434]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:15:21.642186 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:15:21.626325 ntpd[1434]: ---------------------------------------------------- Feb 13 20:15:21.670306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 20:15:21.626339 ntpd[1434]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:15:21.671155 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:15:21.626354 ntpd[1434]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:15:21.682685 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:15:21.813140 update_engine[1456]: I20250213 20:15:21.790719 1456 main.cc:92] Flatcar Update Engine starting Feb 13 20:15:21.813140 update_engine[1456]: I20250213 20:15:21.794399 1456 update_check_scheduler.cc:74] Next update check in 6m26s Feb 13 20:15:21.626369 ntpd[1434]: corporation. Support and training for ntp-4 are Feb 13 20:15:21.698598 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:15:21.815897 jq[1459]: true Feb 13 20:15:21.626383 ntpd[1434]: available at https://www.nwtime.org/support Feb 13 20:15:21.713570 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:15:21.626397 ntpd[1434]: ---------------------------------------------------- Feb 13 20:15:21.738983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 13 20:15:21.633401 ntpd[1434]: proto: precision = 0.069 usec (-24) Feb 13 20:15:21.740608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:15:21.633872 ntpd[1434]: basedate set to 2025-02-01 Feb 13 20:15:21.741094 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:15:21.633894 ntpd[1434]: gps base set to 2025-02-02 (week 2352) Feb 13 20:15:21.741316 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:15:21.647765 ntpd[1434]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:15:21.777047 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:15:21.647843 ntpd[1434]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:15:21.777312 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:15:21.648108 ntpd[1434]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:15:21.806082 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:15:21.648167 ntpd[1434]: Listen normally on 3 eth0 10.128.0.47:123 Feb 13 20:15:21.806336 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:15:21.648228 ntpd[1434]: Listen normally on 4 lo [::1]:123 Feb 13 20:15:21.648299 ntpd[1434]: bind(21) AF_INET6 fe80::4001:aff:fe80:2f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:15:21.648330 ntpd[1434]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:2f%2#123 Feb 13 20:15:21.648355 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:2f%2 Feb 13 20:15:21.648402 ntpd[1434]: Listening on routing socket on fd #21 for interface updates Feb 13 20:15:21.653863 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:15:21.653904 ntpd[1434]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:15:21.820580 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:15:21.820626 systemd-logind[1453]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:15:21.820656 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:15:21.821212 systemd-logind[1453]: New seat seat0. Feb 13 20:15:21.830237 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:15:21.864101 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:15:21.890268 jq[1464]: true Feb 13 20:15:21.915112 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:15:21.928687 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:15:21.939149 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:15:21.953233 tar[1463]: linux-amd64/helm Feb 13 20:15:21.986406 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:15:21.999532 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 20:15:21.999829 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:15:22.000063 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:15:22.022065 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:15:22.033634 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:15:22.033927 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:15:22.053892 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:15:22.056159 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:15:22.075111 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:15:22.114866 systemd[1]: Starting sshkeys.service... Feb 13 20:15:22.183521 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:15:22.205621 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:15:22.340072 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:15:22.340314 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:15:22.345200 dbus-daemon[1427]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1496 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:15:22.366594 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 20:15:22.390473 coreos-metadata[1501]: Feb 13 20:15:22.389 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 20:15:22.403864 coreos-metadata[1501]: Feb 13 20:15:22.392 INFO Fetch failed with 404: resource not found Feb 13 20:15:22.403864 coreos-metadata[1501]: Feb 13 20:15:22.392 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 20:15:22.403864 coreos-metadata[1501]: Feb 13 20:15:22.394 INFO Fetch successful Feb 13 20:15:22.403864 coreos-metadata[1501]: Feb 13 20:15:22.394 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 20:15:22.406486 coreos-metadata[1501]: Feb 13 20:15:22.406 INFO Fetch failed with 404: resource not found Feb 13 20:15:22.406486 coreos-metadata[1501]: Feb 13 20:15:22.406 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 20:15:22.410480 coreos-metadata[1501]: Feb 13 20:15:22.409 INFO Fetch failed with 404: resource not found Feb 13 20:15:22.410480 coreos-metadata[1501]: Feb 13 20:15:22.409 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 20:15:22.410480 coreos-metadata[1501]: Feb 13 20:15:22.409 INFO Fetch successful Feb 13 20:15:22.412194 unknown[1501]: wrote ssh authorized keys file for user: core Feb 13 20:15:22.464675 update-ssh-keys[1513]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:15:22.468642 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:15:22.485553 systemd[1]: Finished sshkeys.service. 
Feb 13 20:15:22.509810 polkitd[1508]: Started polkitd version 121 Feb 13 20:15:22.534256 polkitd[1508]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:15:22.538710 polkitd[1508]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:15:22.543125 polkitd[1508]: Finished loading, compiling and executing 2 rules Feb 13 20:15:22.544668 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:15:22.544930 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:15:22.545962 polkitd[1508]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:15:22.605435 systemd-hostnamed[1496]: Hostname set to (transient) Feb 13 20:15:22.607172 systemd-resolved[1320]: System hostname changed to 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal'. Feb 13 20:15:22.615203 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:15:22.628028 ntpd[1434]: bind(24) AF_INET6 fe80::4001:aff:fe80:2f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:15:22.629171 ntpd[1434]: 13 Feb 20:15:22 ntpd[1434]: bind(24) AF_INET6 fe80::4001:aff:fe80:2f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:15:22.629171 ntpd[1434]: 13 Feb 20:15:22 ntpd[1434]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:2f%2#123 Feb 13 20:15:22.629171 ntpd[1434]: 13 Feb 20:15:22 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:2f%2 Feb 13 20:15:22.629013 ntpd[1434]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:2f%2#123 Feb 13 20:15:22.629037 ntpd[1434]: failed to init interface for address fe80::4001:aff:fe80:2f%2 Feb 13 20:15:22.721537 containerd[1465]: time="2025-02-13T20:15:22.719123940Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:15:22.819516 systemd-networkd[1383]: eth0: Gained IPv6LL Feb 13 20:15:22.824989 systemd[1]: 
Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:15:22.837402 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:15:22.858241 containerd[1465]: time="2025-02-13T20:15:22.858180595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.858750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.864974091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.865029089Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.865061053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.865282502Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.865317002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.865882 containerd[1465]: time="2025-02-13T20:15:22.865405780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.865435634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.866575998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.866611624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.866637600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.866658356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.866807250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.867130735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.867359582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:22.868161 containerd[1465]: time="2025-02-13T20:15:22.867388337Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 20:15:22.869356 containerd[1465]: time="2025-02-13T20:15:22.869319422Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:15:22.870352 containerd[1465]: time="2025-02-13T20:15:22.870194949Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:15:22.876619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878441270Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878601826Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878630155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878667357Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878693954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:15:22.881498 containerd[1465]: time="2025-02-13T20:15:22.878880668Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:15:22.881981 containerd[1465]: time="2025-02-13T20:15:22.881946768Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:15:22.882941 containerd[1465]: time="2025-02-13T20:15:22.882907178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 20:15:22.883106 containerd[1465]: time="2025-02-13T20:15:22.883085279Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:15:22.883215 containerd[1465]: time="2025-02-13T20:15:22.883198217Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:15:22.883301 containerd[1465]: time="2025-02-13T20:15:22.883285557Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.883399 containerd[1465]: time="2025-02-13T20:15:22.883383584Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.883590 containerd[1465]: time="2025-02-13T20:15:22.883558192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885484428Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885522830Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885544510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885564327Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885582708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885629793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885652959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885672577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885695785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885714867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885744697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885779350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885802328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886177 containerd[1465]: time="2025-02-13T20:15:22.885823878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885849801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885870901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885892044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885913642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885939404Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885974713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.885993149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.886865 containerd[1465]: time="2025-02-13T20:15:22.886011245Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887501965Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887561624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887585332Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887610882Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887631069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887657623Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887676666Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:15:22.889761 containerd[1465]: time="2025-02-13T20:15:22.887697441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:15:22.890180 containerd[1465]: time="2025-02-13T20:15:22.888913463Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:15:22.890180 containerd[1465]: time="2025-02-13T20:15:22.889015193Z" level=info msg="Connect containerd service" Feb 13 20:15:22.890180 containerd[1465]: time="2025-02-13T20:15:22.889077649Z" level=info msg="using legacy CRI server" Feb 13 20:15:22.890180 containerd[1465]: time="2025-02-13T20:15:22.889091881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:15:22.890180 containerd[1465]: time="2025-02-13T20:15:22.889258992Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:15:22.893080 containerd[1465]: 
time="2025-02-13T20:15:22.892244139Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893516164Z" level=info msg="Start subscribing containerd event" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893598728Z" level=info msg="Start recovering state" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893698660Z" level=info msg="Start event monitor" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893724298Z" level=info msg="Start snapshots syncer" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893738495Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:15:22.894475 containerd[1465]: time="2025-02-13T20:15:22.893752860Z" level=info msg="Start streaming server" Feb 13 20:15:22.896229 containerd[1465]: time="2025-02-13T20:15:22.894981622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:15:22.896229 containerd[1465]: time="2025-02-13T20:15:22.895067377Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:15:22.896229 containerd[1465]: time="2025-02-13T20:15:22.895140642Z" level=info msg="containerd successfully booted in 0.180709s" Feb 13 20:15:22.899031 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 20:15:22.909247 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:15:22.917659 init.sh[1531]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 20:15:22.919212 init.sh[1531]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 20:15:22.919546 init.sh[1531]: + /usr/bin/google_instance_setup Feb 13 20:15:22.976613 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 20:15:23.469884 tar[1463]: linux-amd64/LICENSE Feb 13 20:15:23.470434 tar[1463]: linux-amd64/README.md Feb 13 20:15:23.508396 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:15:23.685976 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:15:23.733265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:15:23.750954 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:15:23.773545 systemd[1]: Started sshd@0-10.128.0.47:22-218.92.0.204:1256.service - OpenSSH per-connection server daemon (218.92.0.204:1256). Feb 13 20:15:23.797282 systemd[1]: Started sshd@1-10.128.0.47:22-139.178.89.65:45044.service - OpenSSH per-connection server daemon (139.178.89.65:45044). Feb 13 20:15:23.799750 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:15:23.800005 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:15:23.831252 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:15:23.875130 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:15:23.896007 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:15:23.914284 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:15:23.917139 instance-setup[1533]: INFO Running google_set_multiqueue. Feb 13 20:15:23.925164 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:15:23.941594 instance-setup[1533]: INFO Set channels for eth0 to 2. Feb 13 20:15:23.947914 instance-setup[1533]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 20:15:23.950213 instance-setup[1533]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 20:15:23.950928 instance-setup[1533]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Feb 13 20:15:23.953378 instance-setup[1533]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 20:15:23.954131 instance-setup[1533]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 20:15:23.956295 instance-setup[1533]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 20:15:23.956945 instance-setup[1533]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 20:15:23.959840 instance-setup[1533]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 20:15:23.968226 instance-setup[1533]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:15:23.973247 instance-setup[1533]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:15:23.975305 instance-setup[1533]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 20:15:23.975365 instance-setup[1533]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 20:15:23.997009 init.sh[1531]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 20:15:24.081996 sshd[1556]: Unable to negotiate with 218.92.0.204 port 1256: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 13 20:15:24.084880 systemd[1]: sshd@0-10.128.0.47:22-218.92.0.204:1256.service: Deactivated successfully. Feb 13 20:15:24.181053 startup-script[1595]: INFO Starting startup scripts. Feb 13 20:15:24.189386 startup-script[1595]: INFO No startup scripts found in metadata. Feb 13 20:15:24.189715 startup-script[1595]: INFO Finished running startup scripts. 
Feb 13 20:15:24.197422 sshd[1559]: Accepted publickey for core from 139.178.89.65 port 45044 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:24.200000 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:24.217018 init.sh[1531]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 20:15:24.217018 init.sh[1531]: + daemon_pids=() Feb 13 20:15:24.217018 init.sh[1531]: + for d in accounts clock_skew network Feb 13 20:15:24.218363 init.sh[1531]: + daemon_pids+=($!) Feb 13 20:15:24.218363 init.sh[1531]: + for d in accounts clock_skew network Feb 13 20:15:24.218363 init.sh[1531]: + daemon_pids+=($!) Feb 13 20:15:24.218363 init.sh[1531]: + for d in accounts clock_skew network Feb 13 20:15:24.218363 init.sh[1531]: + daemon_pids+=($!) Feb 13 20:15:24.218363 init.sh[1531]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 20:15:24.218363 init.sh[1531]: + /usr/bin/systemd-notify --ready Feb 13 20:15:24.218916 init.sh[1600]: + /usr/bin/google_accounts_daemon Feb 13 20:15:24.219780 init.sh[1601]: + /usr/bin/google_clock_skew_daemon Feb 13 20:15:24.220046 init.sh[1602]: + /usr/bin/google_network_daemon Feb 13 20:15:24.223460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:15:24.249647 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:15:24.261064 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 20:15:24.283557 init.sh[1531]: + wait -n 1600 1601 1602 Feb 13 20:15:24.288545 systemd-logind[1453]: New session 1 of user core. Feb 13 20:15:24.318274 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:15:24.342658 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 20:15:24.380434 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:15:24.644756 systemd[1606]: Queued start job for default target default.target. Feb 13 20:15:24.649071 systemd[1606]: Created slice app.slice - User Application Slice. Feb 13 20:15:24.649121 systemd[1606]: Reached target paths.target - Paths. Feb 13 20:15:24.649144 systemd[1606]: Reached target timers.target - Timers. Feb 13 20:15:24.652841 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:15:24.694416 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:15:24.694653 systemd[1606]: Reached target sockets.target - Sockets. Feb 13 20:15:24.694682 systemd[1606]: Reached target basic.target - Basic System. Feb 13 20:15:24.694760 systemd[1606]: Reached target default.target - Main User Target. Feb 13 20:15:24.694817 systemd[1606]: Startup finished in 296ms. Feb 13 20:15:24.695005 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:15:24.716261 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:15:24.798650 google-networking[1602]: INFO Starting Google Networking daemon. Feb 13 20:15:24.800742 google-clock-skew[1601]: INFO Starting Google Clock Skew daemon. Feb 13 20:15:24.809095 google-clock-skew[1601]: INFO Clock drift token has changed: 0. Feb 13 20:15:24.864046 groupadd[1623]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 20:15:24.873430 groupadd[1623]: group added to /etc/gshadow: name=google-sudoers Feb 13 20:15:24.930788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:24.944750 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 20:15:24.950976 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:15:24.955842 systemd[1]: Startup finished in 1.021s (kernel) + 8.994s (initrd) + 9.265s (userspace) = 19.281s. Feb 13 20:15:24.963381 groupadd[1623]: new group: name=google-sudoers, GID=1000 Feb 13 20:15:24.986923 systemd[1]: Started sshd@2-10.128.0.47:22-139.178.89.65:50036.service - OpenSSH per-connection server daemon (139.178.89.65:50036). Feb 13 20:15:25.069030 google-accounts[1600]: INFO Starting Google Accounts daemon. Feb 13 20:15:25.082253 google-accounts[1600]: WARNING OS Login not installed. Feb 13 20:15:25.083817 google-accounts[1600]: INFO Creating a new user account for 0. Feb 13 20:15:25.088334 init.sh[1645]: useradd: invalid user name '0': use --badname to ignore Feb 13 20:15:25.089094 google-accounts[1600]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 20:15:25.338312 sshd[1637]: Accepted publickey for core from 139.178.89.65 port 50036 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:25.339907 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:25.347716 systemd-logind[1453]: New session 2 of user core. Feb 13 20:15:25.351672 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:15:26.000224 systemd-resolved[1320]: Clock change detected. Flushing caches. Feb 13 20:15:26.000976 google-clock-skew[1601]: INFO Synced system time with hardware clock. Feb 13 20:15:26.026667 sshd[1637]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:26.034432 systemd[1]: sshd@2-10.128.0.47:22-139.178.89.65:50036.service: Deactivated successfully. Feb 13 20:15:26.038593 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:15:26.039643 systemd-logind[1453]: Session 2 logged out. 
Waiting for processes to exit. Feb 13 20:15:26.042495 systemd-logind[1453]: Removed session 2. Feb 13 20:15:26.078071 systemd[1]: Started sshd@3-10.128.0.47:22-139.178.89.65:50052.service - OpenSSH per-connection server daemon (139.178.89.65:50052). Feb 13 20:15:26.100589 ntpd[1434]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:2f%2]:123 Feb 13 20:15:26.102506 ntpd[1434]: 13 Feb 20:15:26 ntpd[1434]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:2f%2]:123 Feb 13 20:15:26.316669 kubelet[1632]: E0213 20:15:26.316594 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:15:26.319788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:15:26.320233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:15:26.320618 systemd[1]: kubelet.service: Consumed 1.213s CPU time. Feb 13 20:15:26.382476 sshd[1656]: Accepted publickey for core from 139.178.89.65 port 50052 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:26.384377 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:26.389856 systemd-logind[1453]: New session 3 of user core. Feb 13 20:15:26.399336 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:15:26.593870 sshd[1656]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:26.599237 systemd[1]: sshd@3-10.128.0.47:22-139.178.89.65:50052.service: Deactivated successfully. Feb 13 20:15:26.601409 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:15:26.602264 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:15:26.603702 systemd-logind[1453]: Removed session 3. 
Feb 13 20:15:26.652555 systemd[1]: Started sshd@4-10.128.0.47:22-139.178.89.65:50058.service - OpenSSH per-connection server daemon (139.178.89.65:50058). Feb 13 20:15:26.939411 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 50058 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:26.941239 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:26.947466 systemd-logind[1453]: New session 4 of user core. Feb 13 20:15:26.957364 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:15:27.152539 sshd[1666]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:27.156770 systemd[1]: sshd@4-10.128.0.47:22-139.178.89.65:50058.service: Deactivated successfully. Feb 13 20:15:27.159419 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:15:27.161154 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:15:27.162704 systemd-logind[1453]: Removed session 4. Feb 13 20:15:27.204432 systemd[1]: Started sshd@5-10.128.0.47:22-139.178.89.65:50074.service - OpenSSH per-connection server daemon (139.178.89.65:50074). Feb 13 20:15:27.486755 sshd[1673]: Accepted publickey for core from 139.178.89.65 port 50074 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:27.488575 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:27.494943 systemd-logind[1453]: New session 5 of user core. Feb 13 20:15:27.502376 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 20:15:27.678360 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:15:27.678850 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:27.692953 sudo[1676]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:27.735909 sshd[1673]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:27.742342 systemd[1]: sshd@5-10.128.0.47:22-139.178.89.65:50074.service: Deactivated successfully. Feb 13 20:15:27.744485 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:15:27.745442 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:15:27.747038 systemd-logind[1453]: Removed session 5. Feb 13 20:15:27.792513 systemd[1]: Started sshd@6-10.128.0.47:22-139.178.89.65:50088.service - OpenSSH per-connection server daemon (139.178.89.65:50088). Feb 13 20:15:28.089249 sshd[1681]: Accepted publickey for core from 139.178.89.65 port 50088 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:28.091438 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:28.097776 systemd-logind[1453]: New session 6 of user core. Feb 13 20:15:28.104384 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 20:15:28.270545 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:15:28.271037 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:28.275748 sudo[1685]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:28.289084 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:15:28.289635 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:28.305524 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:15:28.310488 auditctl[1688]: No rules Feb 13 20:15:28.311910 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:15:28.312253 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:28.315171 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:15:28.364095 augenrules[1706]: No rules Feb 13 20:15:28.365448 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:28.367245 sudo[1684]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:28.411735 sshd[1681]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:28.417108 systemd[1]: sshd@6-10.128.0.47:22-139.178.89.65:50088.service: Deactivated successfully. Feb 13 20:15:28.419488 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:15:28.420434 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:15:28.421803 systemd-logind[1453]: Removed session 6. Feb 13 20:15:28.471521 systemd[1]: Started sshd@7-10.128.0.47:22-139.178.89.65:50100.service - OpenSSH per-connection server daemon (139.178.89.65:50100). 
Feb 13 20:15:28.758338 sshd[1714]: Accepted publickey for core from 139.178.89.65 port 50100 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:15:28.760294 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:28.767363 systemd-logind[1453]: New session 7 of user core. Feb 13 20:15:28.773350 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:15:28.939190 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:15:28.939700 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:29.388548 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:15:29.391247 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:15:29.834569 dockerd[1733]: time="2025-02-13T20:15:29.834472517Z" level=info msg="Starting up" Feb 13 20:15:29.950047 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport649778784-merged.mount: Deactivated successfully. Feb 13 20:15:30.026606 dockerd[1733]: time="2025-02-13T20:15:30.026536749Z" level=info msg="Loading containers: start." Feb 13 20:15:30.179184 kernel: Initializing XFRM netlink socket Feb 13 20:15:30.279565 systemd-networkd[1383]: docker0: Link UP Feb 13 20:15:30.297922 dockerd[1733]: time="2025-02-13T20:15:30.297863183Z" level=info msg="Loading containers: done." 
Feb 13 20:15:30.317787 dockerd[1733]: time="2025-02-13T20:15:30.317708585Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:15:30.318041 dockerd[1733]: time="2025-02-13T20:15:30.317846027Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:15:30.318041 dockerd[1733]: time="2025-02-13T20:15:30.317995186Z" level=info msg="Daemon has completed initialization" Feb 13 20:15:30.358061 dockerd[1733]: time="2025-02-13T20:15:30.357886406Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:15:30.358474 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:15:31.243523 containerd[1465]: time="2025-02-13T20:15:31.243464756Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:15:31.693930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808724943.mount: Deactivated successfully. 
Feb 13 20:15:33.117750 containerd[1465]: time="2025-02-13T20:15:33.117672146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:33.119303 containerd[1465]: time="2025-02-13T20:15:33.119253142Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27983216" Feb 13 20:15:33.120103 containerd[1465]: time="2025-02-13T20:15:33.120040588Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:33.123645 containerd[1465]: time="2025-02-13T20:15:33.123569869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:33.125282 containerd[1465]: time="2025-02-13T20:15:33.125032723Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.881512662s" Feb 13 20:15:33.125282 containerd[1465]: time="2025-02-13T20:15:33.125086306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 20:15:33.127927 containerd[1465]: time="2025-02-13T20:15:33.127870214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:15:34.489610 containerd[1465]: time="2025-02-13T20:15:34.489540002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:34.491191 containerd[1465]: time="2025-02-13T20:15:34.491107624Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24710127" Feb 13 20:15:34.492479 containerd[1465]: time="2025-02-13T20:15:34.492406530Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:34.495926 containerd[1465]: time="2025-02-13T20:15:34.495854222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:34.497354 containerd[1465]: time="2025-02-13T20:15:34.497308799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.369380019s" Feb 13 20:15:34.497464 containerd[1465]: time="2025-02-13T20:15:34.497361180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 20:15:34.498426 containerd[1465]: time="2025-02-13T20:15:34.497980081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:15:35.671674 containerd[1465]: time="2025-02-13T20:15:35.671593244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:35.673228 containerd[1465]: time="2025-02-13T20:15:35.673168258Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18654341" Feb 13 20:15:35.674382 containerd[1465]: time="2025-02-13T20:15:35.674307416Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:35.678008 containerd[1465]: time="2025-02-13T20:15:35.677930567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:35.679711 containerd[1465]: time="2025-02-13T20:15:35.679471791Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.181448252s" Feb 13 20:15:35.679711 containerd[1465]: time="2025-02-13T20:15:35.679521603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 20:15:35.680205 containerd[1465]: time="2025-02-13T20:15:35.680171704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:15:36.423896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:15:36.433496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:36.705484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:15:36.711305 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:15:36.791430 kubelet[1947]: E0213 20:15:36.791369 1947 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:15:36.798204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:15:36.798500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:15:36.845539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450744598.mount: Deactivated successfully. Feb 13 20:15:37.457331 containerd[1465]: time="2025-02-13T20:15:37.457252727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:37.458746 containerd[1465]: time="2025-02-13T20:15:37.458665527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30231003" Feb 13 20:15:37.460297 containerd[1465]: time="2025-02-13T20:15:37.460224214Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:37.463283 containerd[1465]: time="2025-02-13T20:15:37.463207704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:37.464305 containerd[1465]: time="2025-02-13T20:15:37.464071342Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id 
\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.783851447s" Feb 13 20:15:37.464305 containerd[1465]: time="2025-02-13T20:15:37.464142723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 20:15:37.465180 containerd[1465]: time="2025-02-13T20:15:37.464949739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:15:37.861536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978973835.mount: Deactivated successfully. Feb 13 20:15:38.886085 containerd[1465]: time="2025-02-13T20:15:38.886015831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:38.887734 containerd[1465]: time="2025-02-13T20:15:38.887670483Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Feb 13 20:15:38.888910 containerd[1465]: time="2025-02-13T20:15:38.888837530Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:38.892861 containerd[1465]: time="2025-02-13T20:15:38.892752570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:38.894172 containerd[1465]: time="2025-02-13T20:15:38.894100952Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.429108442s" Feb 13 20:15:38.894498 containerd[1465]: time="2025-02-13T20:15:38.894364323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:15:38.895342 containerd[1465]: time="2025-02-13T20:15:38.895290821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:15:39.298008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881584407.mount: Deactivated successfully. Feb 13 20:15:39.304679 containerd[1465]: time="2025-02-13T20:15:39.304614114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:39.305856 containerd[1465]: time="2025-02-13T20:15:39.305794143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Feb 13 20:15:39.306973 containerd[1465]: time="2025-02-13T20:15:39.306898018Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:39.310052 containerd[1465]: time="2025-02-13T20:15:39.309984239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:39.311611 containerd[1465]: time="2025-02-13T20:15:39.310928164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 415.545648ms" Feb 13 20:15:39.311611 containerd[1465]: time="2025-02-13T20:15:39.310975301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:15:39.311932 containerd[1465]: time="2025-02-13T20:15:39.311885450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:15:39.731678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641049124.mount: Deactivated successfully. Feb 13 20:15:41.885621 containerd[1465]: time="2025-02-13T20:15:41.885549112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.887431 containerd[1465]: time="2025-02-13T20:15:41.887360625Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556" Feb 13 20:15:41.888251 containerd[1465]: time="2025-02-13T20:15:41.888202632Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.891987 containerd[1465]: time="2025-02-13T20:15:41.891909438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.895287 containerd[1465]: time="2025-02-13T20:15:41.895226381Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.583303679s" Feb 13 
20:15:41.895287 containerd[1465]: time="2025-02-13T20:15:41.895269608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 20:15:45.570810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:45.581523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:45.621039 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)... Feb 13 20:15:45.621061 systemd[1]: Reloading... Feb 13 20:15:45.753233 zram_generator::config[2128]: No configuration found. Feb 13 20:15:45.938266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:46.040934 systemd[1]: Reloading finished in 419 ms. Feb 13 20:15:46.115673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:46.120444 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:15:46.120767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:46.128526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:46.424114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:46.438758 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:15:46.491518 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:15:46.491518 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:15:46.492002 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:15:46.493378 kubelet[2182]: I0213 20:15:46.493310 2182 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:15:46.722874 kubelet[2182]: I0213 20:15:46.722722 2182 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:15:46.722874 kubelet[2182]: I0213 20:15:46.722757 2182 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:15:46.723601 kubelet[2182]: I0213 20:15:46.723212 2182 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:15:46.767638 kubelet[2182]: I0213 20:15:46.767201 2182 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:15:46.768032 kubelet[2182]: E0213 20:15:46.767996 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:46.777584 kubelet[2182]: E0213 20:15:46.777540 2182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:15:46.777746 kubelet[2182]: I0213 20:15:46.777647 2182 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:15:46.784511 kubelet[2182]: I0213 20:15:46.784469 2182 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:15:46.785797 kubelet[2182]: I0213 20:15:46.784686 2182 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:15:46.785797 kubelet[2182]: I0213 20:15:46.784885 2182 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:15:46.785797 kubelet[2182]: I0213 20:15:46.784939 2182 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"Grace
Period":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:15:46.786169 kubelet[2182]: I0213 20:15:46.785388 2182 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:15:46.786169 kubelet[2182]: I0213 20:15:46.785407 2182 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:15:46.786169 kubelet[2182]: I0213 20:15:46.785554 2182 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:46.792367 kubelet[2182]: I0213 20:15:46.792023 2182 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:15:46.792367 kubelet[2182]: I0213 20:15:46.792071 2182 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:15:46.792367 kubelet[2182]: I0213 20:15:46.792141 2182 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:15:46.792367 kubelet[2182]: I0213 20:15:46.792166 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:15:46.800879 kubelet[2182]: W0213 20:15:46.800802 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:46.801033 kubelet[2182]: E0213 20:15:46.800896 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:46.803492 kubelet[2182]: W0213 20:15:46.803391 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:46.803492 kubelet[2182]: E0213 20:15:46.803461 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:46.803689 kubelet[2182]: I0213 20:15:46.803588 2182 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:15:46.806457 kubelet[2182]: I0213 20:15:46.806262 2182 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:15:46.808151 kubelet[2182]: W0213 20:15:46.807460 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:15:46.809903 kubelet[2182]: I0213 20:15:46.809878 2182 server.go:1269] "Started kubelet" Feb 13 20:15:46.810564 kubelet[2182]: I0213 20:15:46.810494 2182 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:15:46.811869 kubelet[2182]: I0213 20:15:46.811822 2182 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:15:46.814061 kubelet[2182]: I0213 20:15:46.813373 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:15:46.814061 kubelet[2182]: I0213 20:15:46.813753 2182 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:15:46.815725 kubelet[2182]: I0213 20:15:46.814709 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:15:46.824219 kubelet[2182]: I0213 20:15:46.823336 2182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:15:46.824556 kubelet[2182]: E0213 20:15:46.819665 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal.1823ddcb694c2c6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,UID:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:15:46.809842796 +0000 UTC m=+0.365628829,LastTimestamp:2025-02-13 20:15:46.809842796 +0000 UTC 
m=+0.365628829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,}" Feb 13 20:15:46.826423 kubelet[2182]: I0213 20:15:46.826391 2182 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:15:46.826704 kubelet[2182]: E0213 20:15:46.826678 2182 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" not found" Feb 13 20:15:46.827479 kubelet[2182]: E0213 20:15:46.827430 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="200ms" Feb 13 20:15:46.828172 kubelet[2182]: I0213 20:15:46.828140 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:15:46.830160 kubelet[2182]: I0213 20:15:46.829308 2182 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:15:46.830361 kubelet[2182]: W0213 20:15:46.830116 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:46.830545 kubelet[2182]: E0213 20:15:46.830520 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" 
logger="UnhandledError" Feb 13 20:15:46.830769 kubelet[2182]: E0213 20:15:46.830744 2182 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:15:46.831053 kubelet[2182]: I0213 20:15:46.831032 2182 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:15:46.831189 kubelet[2182]: I0213 20:15:46.831174 2182 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:15:46.832321 kubelet[2182]: I0213 20:15:46.832294 2182 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:15:46.846000 kubelet[2182]: I0213 20:15:46.845948 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:15:46.848222 kubelet[2182]: I0213 20:15:46.847914 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:15:46.848222 kubelet[2182]: I0213 20:15:46.847979 2182 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:15:46.848222 kubelet[2182]: I0213 20:15:46.848014 2182 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:15:46.848222 kubelet[2182]: E0213 20:15:46.848090 2182 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:15:46.857462 kubelet[2182]: W0213 20:15:46.857399 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:46.857603 kubelet[2182]: E0213 20:15:46.857504 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:46.875394 kubelet[2182]: I0213 20:15:46.875327 2182 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:15:46.875394 kubelet[2182]: I0213 20:15:46.875353 2182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:15:46.875394 kubelet[2182]: I0213 20:15:46.875381 2182 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:46.880206 kubelet[2182]: I0213 20:15:46.880163 2182 policy_none.go:49] "None policy: Start" Feb 13 20:15:46.881166 kubelet[2182]: I0213 20:15:46.881077 2182 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:15:46.881166 kubelet[2182]: I0213 20:15:46.881135 2182 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:15:46.891958 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:15:46.909878 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:15:46.915269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:15:46.927664 kubelet[2182]: E0213 20:15:46.927612 2182 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" not found" Feb 13 20:15:46.931068 kubelet[2182]: I0213 20:15:46.930489 2182 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:15:46.931068 kubelet[2182]: I0213 20:15:46.930781 2182 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:15:46.931068 kubelet[2182]: I0213 20:15:46.930801 2182 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:15:46.931343 kubelet[2182]: I0213 20:15:46.931171 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:15:46.933843 kubelet[2182]: E0213 20:15:46.933815 2182 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" not found" Feb 13 20:15:46.966844 systemd[1]: Created slice kubepods-burstable-pod5ff052f4040ca761769cef1933e1a1d1.slice - libcontainer container kubepods-burstable-pod5ff052f4040ca761769cef1933e1a1d1.slice. Feb 13 20:15:46.987081 systemd[1]: Created slice kubepods-burstable-pod3de6a6aa512a227bd857daaff1848e9d.slice - libcontainer container kubepods-burstable-pod3de6a6aa512a227bd857daaff1848e9d.slice. Feb 13 20:15:47.001720 systemd[1]: Created slice kubepods-burstable-pod7e8d2e4c70c04c92adf7ce97d76b441b.slice - libcontainer container kubepods-burstable-pod7e8d2e4c70c04c92adf7ce97d76b441b.slice. 
Feb 13 20:15:47.028786 kubelet[2182]: E0213 20:15:47.028725 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="400ms" Feb 13 20:15:47.033192 kubelet[2182]: I0213 20:15:47.033145 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"5ff052f4040ca761769cef1933e1a1d1\") " pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033392 kubelet[2182]: I0213 20:15:47.033256 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"5ff052f4040ca761769cef1933e1a1d1\") " pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033392 kubelet[2182]: I0213 20:15:47.033300 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033392 kubelet[2182]: I0213 20:15:47.033330 2182 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033392 kubelet[2182]: I0213 20:15:47.033362 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033613 kubelet[2182]: I0213 20:15:47.033397 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e8d2e4c70c04c92adf7ce97d76b441b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"7e8d2e4c70c04c92adf7ce97d76b441b\") " pod="kube-system/kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033613 kubelet[2182]: I0213 20:15:47.033431 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"5ff052f4040ca761769cef1933e1a1d1\") " pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033613 kubelet[2182]: I0213 20:15:47.033461 2182 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.033613 kubelet[2182]: I0213 20:15:47.033490 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.043531 kubelet[2182]: I0213 20:15:47.043463 2182 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.043934 kubelet[2182]: E0213 20:15:47.043892 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.249191 kubelet[2182]: I0213 20:15:47.249004 2182 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.249571 kubelet[2182]: E0213 20:15:47.249495 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.284471 containerd[1465]: time="2025-02-13T20:15:47.284004071Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:5ff052f4040ca761769cef1933e1a1d1,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:47.300547 containerd[1465]: time="2025-02-13T20:15:47.300451965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:3de6a6aa512a227bd857daaff1848e9d,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:47.305330 containerd[1465]: time="2025-02-13T20:15:47.305278398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:7e8d2e4c70c04c92adf7ce97d76b441b,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:47.429310 kubelet[2182]: E0213 20:15:47.429238 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="800ms" Feb 13 20:15:47.667222 kubelet[2182]: I0213 20:15:47.654705 2182 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:47.667222 kubelet[2182]: E0213 20:15:47.655159 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:48.230599 kubelet[2182]: E0213 20:15:48.230531 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" 
interval="1.6s" Feb 13 20:15:48.246390 kubelet[2182]: W0213 20:15:48.246294 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:48.246671 kubelet[2182]: E0213 20:15:48.246395 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:48.296583 kubelet[2182]: W0213 20:15:48.296486 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:48.296583 kubelet[2182]: E0213 20:15:48.296584 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:48.320852 kubelet[2182]: W0213 20:15:48.320761 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:48.320852 kubelet[2182]: E0213 20:15:48.320858 2182 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:48.347104 kubelet[2182]: W0213 20:15:48.347015 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Feb 13 20:15:48.347104 kubelet[2182]: E0213 20:15:48.347106 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:48.461535 kubelet[2182]: I0213 20:15:48.461490 2182 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:48.461951 kubelet[2182]: E0213 20:15:48.461895 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:48.807102 kubelet[2182]: E0213 20:15:48.807044 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.47:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:49.207286 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3924693486.mount: Deactivated successfully. Feb 13 20:15:49.215357 containerd[1465]: time="2025-02-13T20:15:49.215281213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:49.216691 containerd[1465]: time="2025-02-13T20:15:49.216629452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:49.217946 containerd[1465]: time="2025-02-13T20:15:49.217877293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:15:49.218714 containerd[1465]: time="2025-02-13T20:15:49.218657760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 20:15:49.220037 containerd[1465]: time="2025-02-13T20:15:49.219966743Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:49.221429 containerd[1465]: time="2025-02-13T20:15:49.221378184Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:49.222083 containerd[1465]: time="2025-02-13T20:15:49.222021408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:15:49.224418 containerd[1465]: time="2025-02-13T20:15:49.224310109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:49.227763 containerd[1465]: time="2025-02-13T20:15:49.227103520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.942937227s" Feb 13 20:15:49.229643 containerd[1465]: time="2025-02-13T20:15:49.229594361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.924236338s" Feb 13 20:15:49.231677 containerd[1465]: time="2025-02-13T20:15:49.231629400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.931061746s" Feb 13 20:15:49.437562 containerd[1465]: time="2025-02-13T20:15:49.436622492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:49.437562 containerd[1465]: time="2025-02-13T20:15:49.436708466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:49.437562 containerd[1465]: time="2025-02-13T20:15:49.436734455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.437562 containerd[1465]: time="2025-02-13T20:15:49.436861329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.439698 containerd[1465]: time="2025-02-13T20:15:49.439586602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:49.439841 containerd[1465]: time="2025-02-13T20:15:49.439668816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:49.439841 containerd[1465]: time="2025-02-13T20:15:49.439711101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.439970 containerd[1465]: time="2025-02-13T20:15:49.439943382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.440688 containerd[1465]: time="2025-02-13T20:15:49.440374539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:49.440688 containerd[1465]: time="2025-02-13T20:15:49.440455441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:49.440688 containerd[1465]: time="2025-02-13T20:15:49.440484572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.440688 containerd[1465]: time="2025-02-13T20:15:49.440596189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:49.488371 systemd[1]: Started cri-containerd-4618e2d44d852259ea7308edab2162c1649530f1919fc214c91b806a6bee1424.scope - libcontainer container 4618e2d44d852259ea7308edab2162c1649530f1919fc214c91b806a6bee1424. Feb 13 20:15:49.490293 systemd[1]: Started cri-containerd-7710468539afc9db2a0443071863454c351aa06a3b2e8ec824b91424e8f2577e.scope - libcontainer container 7710468539afc9db2a0443071863454c351aa06a3b2e8ec824b91424e8f2577e. Feb 13 20:15:49.492408 systemd[1]: Started cri-containerd-bbda1ade8a021db7dc80bba4aa61c3c6214457e9774359a8b721e54a19f3d572.scope - libcontainer container bbda1ade8a021db7dc80bba4aa61c3c6214457e9774359a8b721e54a19f3d572. Feb 13 20:15:49.581342 containerd[1465]: time="2025-02-13T20:15:49.580814904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:7e8d2e4c70c04c92adf7ce97d76b441b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4618e2d44d852259ea7308edab2162c1649530f1919fc214c91b806a6bee1424\"" Feb 13 20:15:49.589282 kubelet[2182]: E0213 20:15:49.588819 2182 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-21291" Feb 13 20:15:49.593532 containerd[1465]: time="2025-02-13T20:15:49.593488496Z" level=info msg="CreateContainer within sandbox \"4618e2d44d852259ea7308edab2162c1649530f1919fc214c91b806a6bee1424\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:15:49.610566 containerd[1465]: time="2025-02-13T20:15:49.610464993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:5ff052f4040ca761769cef1933e1a1d1,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7710468539afc9db2a0443071863454c351aa06a3b2e8ec824b91424e8f2577e\"" Feb 13 20:15:49.613824 kubelet[2182]: E0213 20:15:49.613479 2182 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-21291" Feb 13 20:15:49.616112 containerd[1465]: time="2025-02-13T20:15:49.616067996Z" level=info msg="CreateContainer within sandbox \"7710468539afc9db2a0443071863454c351aa06a3b2e8ec824b91424e8f2577e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:15:49.619924 containerd[1465]: time="2025-02-13T20:15:49.619877949Z" level=info msg="CreateContainer within sandbox \"4618e2d44d852259ea7308edab2162c1649530f1919fc214c91b806a6bee1424\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6eba046da6dedc54288f960e6b84107ac59a93af1d7e4de7ce637308c8237cf6\"" Feb 13 20:15:49.623153 containerd[1465]: time="2025-02-13T20:15:49.621646809Z" level=info msg="StartContainer for \"6eba046da6dedc54288f960e6b84107ac59a93af1d7e4de7ce637308c8237cf6\"" Feb 13 20:15:49.628393 containerd[1465]: time="2025-02-13T20:15:49.628307119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,Uid:3de6a6aa512a227bd857daaff1848e9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbda1ade8a021db7dc80bba4aa61c3c6214457e9774359a8b721e54a19f3d572\"" Feb 13 20:15:49.632518 kubelet[2182]: E0213 20:15:49.632479 2182 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flat" Feb 13 20:15:49.635141 containerd[1465]: time="2025-02-13T20:15:49.635083132Z" level=info msg="CreateContainer within 
sandbox \"bbda1ade8a021db7dc80bba4aa61c3c6214457e9774359a8b721e54a19f3d572\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:15:49.639510 containerd[1465]: time="2025-02-13T20:15:49.639469175Z" level=info msg="CreateContainer within sandbox \"7710468539afc9db2a0443071863454c351aa06a3b2e8ec824b91424e8f2577e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03e2eff0c704883f4596835fc18e06eef318284e5ae78f498354e8f9cc397cb8\"" Feb 13 20:15:49.640333 containerd[1465]: time="2025-02-13T20:15:49.640307220Z" level=info msg="StartContainer for \"03e2eff0c704883f4596835fc18e06eef318284e5ae78f498354e8f9cc397cb8\"" Feb 13 20:15:49.659141 containerd[1465]: time="2025-02-13T20:15:49.657513146Z" level=info msg="CreateContainer within sandbox \"bbda1ade8a021db7dc80bba4aa61c3c6214457e9774359a8b721e54a19f3d572\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc5c5dd1b41c97e4168500a24870c098ea0e3dd6decf76a4d067bfcd93d6e72e\"" Feb 13 20:15:49.662741 containerd[1465]: time="2025-02-13T20:15:49.662682135Z" level=info msg="StartContainer for \"fc5c5dd1b41c97e4168500a24870c098ea0e3dd6decf76a4d067bfcd93d6e72e\"" Feb 13 20:15:49.676382 systemd[1]: Started cri-containerd-6eba046da6dedc54288f960e6b84107ac59a93af1d7e4de7ce637308c8237cf6.scope - libcontainer container 6eba046da6dedc54288f960e6b84107ac59a93af1d7e4de7ce637308c8237cf6. Feb 13 20:15:49.702338 systemd[1]: Started cri-containerd-03e2eff0c704883f4596835fc18e06eef318284e5ae78f498354e8f9cc397cb8.scope - libcontainer container 03e2eff0c704883f4596835fc18e06eef318284e5ae78f498354e8f9cc397cb8. Feb 13 20:15:49.735455 systemd[1]: Started cri-containerd-fc5c5dd1b41c97e4168500a24870c098ea0e3dd6decf76a4d067bfcd93d6e72e.scope - libcontainer container fc5c5dd1b41c97e4168500a24870c098ea0e3dd6decf76a4d067bfcd93d6e72e. 
Feb 13 20:15:49.798231 containerd[1465]: time="2025-02-13T20:15:49.797955585Z" level=info msg="StartContainer for \"6eba046da6dedc54288f960e6b84107ac59a93af1d7e4de7ce637308c8237cf6\" returns successfully" Feb 13 20:15:49.831479 kubelet[2182]: E0213 20:15:49.831387 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="3.2s" Feb 13 20:15:49.855917 containerd[1465]: time="2025-02-13T20:15:49.855240847Z" level=info msg="StartContainer for \"03e2eff0c704883f4596835fc18e06eef318284e5ae78f498354e8f9cc397cb8\" returns successfully" Feb 13 20:15:49.863714 containerd[1465]: time="2025-02-13T20:15:49.863661927Z" level=info msg="StartContainer for \"fc5c5dd1b41c97e4168500a24870c098ea0e3dd6decf76a4d067bfcd93d6e72e\" returns successfully" Feb 13 20:15:50.067640 kubelet[2182]: I0213 20:15:50.066234 2182 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:53.001868 kubelet[2182]: E0213 20:15:53.001708 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal.1823ddcb694c2c6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,UID:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:15:46.809842796 +0000 UTC m=+0.365628829,LastTimestamp:2025-02-13 20:15:46.809842796 +0000 UTC 
m=+0.365628829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,}" Feb 13 20:15:53.011785 kubelet[2182]: I0213 20:15:53.011735 2182 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:53.011962 kubelet[2182]: E0213 20:15:53.011805 2182 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\": node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" not found" Feb 13 20:15:53.069100 kubelet[2182]: E0213 20:15:53.068917 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal.1823ddcb6a8ae0a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,UID:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:15:46.830729382 +0000 UTC m=+0.386515417,LastTimestamp:2025-02-13 20:15:46.830729382 +0000 UTC m=+0.386515417,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,}" Feb 13 20:15:53.114982 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 20:15:53.121868 kubelet[2182]: E0213 20:15:53.121811 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Feb 13 20:15:53.127494 kubelet[2182]: E0213 20:15:53.127336 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal.1823ddcb6cfab0f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,UID:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:15:46.871611636 +0000 UTC m=+0.427397660,LastTimestamp:2025-02-13 20:15:46.871611636 +0000 UTC m=+0.427397660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal,}" Feb 13 20:15:53.806506 kubelet[2182]: I0213 20:15:53.806408 2182 apiserver.go:52] "Watching apiserver" Feb 13 20:15:53.829803 kubelet[2182]: I0213 20:15:53.829732 2182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:15:54.910316 kubelet[2182]: W0213 20:15:54.910155 2182 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:15:54.912386 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)... 
Feb 13 20:15:54.912409 systemd[1]: Reloading... Feb 13 20:15:55.042152 zram_generator::config[2497]: No configuration found. Feb 13 20:15:55.190147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:55.311666 systemd[1]: Reloading finished in 398 ms. Feb 13 20:15:55.371340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:55.379299 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:15:55.379655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:55.387598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:55.643869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:55.657739 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:15:55.732844 kubelet[2545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:15:55.732844 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:15:55.732844 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:15:55.734811 kubelet[2545]: I0213 20:15:55.733440 2545 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:15:55.742471 kubelet[2545]: I0213 20:15:55.742418 2545 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:15:55.742471 kubelet[2545]: I0213 20:15:55.742450 2545 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:15:55.742865 kubelet[2545]: I0213 20:15:55.742827 2545 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:15:55.744517 kubelet[2545]: I0213 20:15:55.744487 2545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:15:55.747326 kubelet[2545]: I0213 20:15:55.747066 2545 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:15:55.751643 kubelet[2545]: E0213 20:15:55.751604 2545 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:15:55.751766 kubelet[2545]: I0213 20:15:55.751652 2545 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:15:55.757147 kubelet[2545]: I0213 20:15:55.756285 2545 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:15:55.757147 kubelet[2545]: I0213 20:15:55.756468 2545 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:15:55.757147 kubelet[2545]: I0213 20:15:55.756642 2545 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:15:55.757389 kubelet[2545]: I0213 20:15:55.756673 2545 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:15:55.757544 kubelet[2545]: I0213 20:15:55.757112 2545 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:15:55.757637 kubelet[2545]: I0213 20:15:55.757625 2545 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:15:55.757768 kubelet[2545]: I0213 20:15:55.757757 2545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:55.758014 kubelet[2545]: I0213 20:15:55.757999 2545 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:15:55.758141 kubelet[2545]: I0213 20:15:55.758107 2545 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:15:55.758283 kubelet[2545]: I0213 20:15:55.758270 2545 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:15:55.758369 kubelet[2545]: I0213 20:15:55.758359 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:15:55.763631 kubelet[2545]: I0213 20:15:55.763602 2545 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:15:55.765221 kubelet[2545]: I0213 20:15:55.764422 2545 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:15:55.767385 kubelet[2545]: I0213 20:15:55.766025 2545 server.go:1269] "Started kubelet" Feb 13 20:15:55.767869 kubelet[2545]: I0213 20:15:55.767834 2545 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:15:55.772149 kubelet[2545]: I0213 20:15:55.769082 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:15:55.772149 kubelet[2545]: I0213 20:15:55.771455 2545 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:15:55.775146 kubelet[2545]: I0213 20:15:55.773055 2545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:15:55.775568 
kubelet[2545]: I0213 20:15:55.775546 2545 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:15:55.779152 kubelet[2545]: I0213 20:15:55.778067 2545 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:15:55.779152 kubelet[2545]: E0213 20:15:55.778461 2545 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" not found" Feb 13 20:15:55.783162 kubelet[2545]: I0213 20:15:55.781093 2545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:15:55.783815 kubelet[2545]: I0213 20:15:55.781380 2545 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:15:55.784550 kubelet[2545]: I0213 20:15:55.781579 2545 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:15:55.786531 kubelet[2545]: I0213 20:15:55.786495 2545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:15:55.790147 kubelet[2545]: I0213 20:15:55.788270 2545 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:15:55.790147 kubelet[2545]: I0213 20:15:55.788317 2545 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:15:55.790147 kubelet[2545]: I0213 20:15:55.788339 2545 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:15:55.790147 kubelet[2545]: E0213 20:15:55.788399 2545 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:15:55.806909 kubelet[2545]: I0213 20:15:55.806852 2545 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:15:55.806909 kubelet[2545]: I0213 20:15:55.806881 2545 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:15:55.807173 kubelet[2545]: I0213 20:15:55.806994 2545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:15:55.884507 kubelet[2545]: I0213 20:15:55.884471 2545 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:15:55.884507 kubelet[2545]: I0213 20:15:55.884497 2545 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:15:55.884723 kubelet[2545]: I0213 20:15:55.884534 2545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.884822 2545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.884861 2545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.884890 2545 policy_none.go:49] "None policy: Start" Feb 13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.886201 2545 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.886229 2545 state_mem.go:35] "Initializing new in-memory state store" Feb 
13 20:15:55.887149 kubelet[2545]: I0213 20:15:55.886500 2545 state_mem.go:75] "Updated machine memory state" Feb 13 20:15:55.889380 kubelet[2545]: E0213 20:15:55.889292 2545 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:15:55.895926 kubelet[2545]: I0213 20:15:55.894988 2545 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:15:55.897185 kubelet[2545]: I0213 20:15:55.896047 2545 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:15:55.897185 kubelet[2545]: I0213 20:15:55.896069 2545 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:15:55.897185 kubelet[2545]: I0213 20:15:55.896422 2545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:15:56.020953 kubelet[2545]: I0213 20:15:56.020782 2545 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.029896 kubelet[2545]: I0213 20:15:56.029841 2545 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.030060 kubelet[2545]: I0213 20:15:56.029972 2545 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.099197 kubelet[2545]: W0213 20:15:56.098939 2545 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:15:56.101947 kubelet[2545]: W0213 20:15:56.101237 2545 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 
20:15:56.101947 kubelet[2545]: W0213 20:15:56.101297 2545 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:15:56.101947 kubelet[2545]: E0213 20:15:56.101652 2545 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.185679 kubelet[2545]: I0213 20:15:56.185598 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.185679 kubelet[2545]: I0213 20:15:56.185677 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"5ff052f4040ca761769cef1933e1a1d1\") " pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185708 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"5ff052f4040ca761769cef1933e1a1d1\") " 
pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185752 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185780 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185806 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185830 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ff052f4040ca761769cef1933e1a1d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: 
\"5ff052f4040ca761769cef1933e1a1d1\") " pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185861 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3de6a6aa512a227bd857daaff1848e9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"3de6a6aa512a227bd857daaff1848e9d\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.186014 kubelet[2545]: I0213 20:15:56.185893 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e8d2e4c70c04c92adf7ce97d76b441b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal\" (UID: \"7e8d2e4c70c04c92adf7ce97d76b441b\") " pod="kube-system/kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:15:56.761142 kubelet[2545]: I0213 20:15:56.761080 2545 apiserver.go:52] "Watching apiserver" Feb 13 20:15:56.784477 kubelet[2545]: I0213 20:15:56.784407 2545 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:15:56.919691 kubelet[2545]: I0213 20:15:56.919003 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" podStartSLOduration=2.91895446 podStartE2EDuration="2.91895446s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:56.918745455 +0000 UTC m=+1.252795664" watchObservedRunningTime="2025-02-13 20:15:56.91895446 +0000 UTC 
m=+1.253004658" Feb 13 20:15:56.919691 kubelet[2545]: I0213 20:15:56.919203 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" podStartSLOduration=0.919191799 podStartE2EDuration="919.191799ms" podCreationTimestamp="2025-02-13 20:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:56.901660877 +0000 UTC m=+1.235711112" watchObservedRunningTime="2025-02-13 20:15:56.919191799 +0000 UTC m=+1.253241984" Feb 13 20:15:56.949149 kubelet[2545]: I0213 20:15:56.948292 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" podStartSLOduration=0.948272013 podStartE2EDuration="948.272013ms" podCreationTimestamp="2025-02-13 20:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:56.933707743 +0000 UTC m=+1.267757952" watchObservedRunningTime="2025-02-13 20:15:56.948272013 +0000 UTC m=+1.282322221" Feb 13 20:16:01.215318 kubelet[2545]: I0213 20:16:01.215251 2545 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:16:01.216285 containerd[1465]: time="2025-02-13T20:16:01.216243102Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:16:01.216773 kubelet[2545]: I0213 20:16:01.216575 2545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:16:01.830778 sudo[1717]: pam_unix(sudo:session): session closed for user root Feb 13 20:16:01.876539 sshd[1714]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:01.885204 systemd[1]: sshd@7-10.128.0.47:22-139.178.89.65:50100.service: Deactivated successfully. Feb 13 20:16:01.887932 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:16:01.888427 systemd[1]: session-7.scope: Consumed 6.518s CPU time, 155.9M memory peak, 0B memory swap peak. Feb 13 20:16:01.889529 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:16:01.891515 systemd-logind[1453]: Removed session 7. Feb 13 20:16:02.107658 systemd[1]: Created slice kubepods-besteffort-poda400666a_e400_4691_8bd3_b7a12d5dad68.slice - libcontainer container kubepods-besteffort-poda400666a_e400_4691_8bd3_b7a12d5dad68.slice. 
Feb 13 20:16:02.126554 kubelet[2545]: I0213 20:16:02.126327 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8hqn\" (UniqueName: \"kubernetes.io/projected/a400666a-e400-4691-8bd3-b7a12d5dad68-kube-api-access-t8hqn\") pod \"kube-proxy-c8rgw\" (UID: \"a400666a-e400-4691-8bd3-b7a12d5dad68\") " pod="kube-system/kube-proxy-c8rgw" Feb 13 20:16:02.126554 kubelet[2545]: I0213 20:16:02.126395 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a400666a-e400-4691-8bd3-b7a12d5dad68-kube-proxy\") pod \"kube-proxy-c8rgw\" (UID: \"a400666a-e400-4691-8bd3-b7a12d5dad68\") " pod="kube-system/kube-proxy-c8rgw" Feb 13 20:16:02.126554 kubelet[2545]: I0213 20:16:02.126425 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a400666a-e400-4691-8bd3-b7a12d5dad68-xtables-lock\") pod \"kube-proxy-c8rgw\" (UID: \"a400666a-e400-4691-8bd3-b7a12d5dad68\") " pod="kube-system/kube-proxy-c8rgw" Feb 13 20:16:02.126554 kubelet[2545]: I0213 20:16:02.126449 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a400666a-e400-4691-8bd3-b7a12d5dad68-lib-modules\") pod \"kube-proxy-c8rgw\" (UID: \"a400666a-e400-4691-8bd3-b7a12d5dad68\") " pod="kube-system/kube-proxy-c8rgw" Feb 13 20:16:02.153187 systemd[1]: Created slice kubepods-besteffort-pod8da23e2d_50e2_409a_ae59_8ff314f046a2.slice - libcontainer container kubepods-besteffort-pod8da23e2d_50e2_409a_ae59_8ff314f046a2.slice. 
Feb 13 20:16:02.227394 kubelet[2545]: I0213 20:16:02.227320 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8da23e2d-50e2-409a-ae59-8ff314f046a2-var-lib-calico\") pod \"tigera-operator-76c4976dd7-rfkqk\" (UID: \"8da23e2d-50e2-409a-ae59-8ff314f046a2\") " pod="tigera-operator/tigera-operator-76c4976dd7-rfkqk" Feb 13 20:16:02.227987 kubelet[2545]: I0213 20:16:02.227428 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24wg\" (UniqueName: \"kubernetes.io/projected/8da23e2d-50e2-409a-ae59-8ff314f046a2-kube-api-access-r24wg\") pod \"tigera-operator-76c4976dd7-rfkqk\" (UID: \"8da23e2d-50e2-409a-ae59-8ff314f046a2\") " pod="tigera-operator/tigera-operator-76c4976dd7-rfkqk" Feb 13 20:16:02.420625 containerd[1465]: time="2025-02-13T20:16:02.420397701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8rgw,Uid:a400666a-e400-4691-8bd3-b7a12d5dad68,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:02.454963 containerd[1465]: time="2025-02-13T20:16:02.454753231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:02.456723 containerd[1465]: time="2025-02-13T20:16:02.455432933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:02.456723 containerd[1465]: time="2025-02-13T20:16:02.456527855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:02.456723 containerd[1465]: time="2025-02-13T20:16:02.456663542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:02.462714 containerd[1465]: time="2025-02-13T20:16:02.462652118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-rfkqk,Uid:8da23e2d-50e2-409a-ae59-8ff314f046a2,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:16:02.507369 systemd[1]: Started cri-containerd-0394585caa256afb12e566daaa63033c44ad294c49e829814d246c2665f5334e.scope - libcontainer container 0394585caa256afb12e566daaa63033c44ad294c49e829814d246c2665f5334e. Feb 13 20:16:02.511187 containerd[1465]: time="2025-02-13T20:16:02.510953387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:02.511187 containerd[1465]: time="2025-02-13T20:16:02.511041634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:02.511187 containerd[1465]: time="2025-02-13T20:16:02.511097211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:02.511608 containerd[1465]: time="2025-02-13T20:16:02.511294051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:02.546600 systemd[1]: Started cri-containerd-318a07c74d042337c431fbaf96bee6a05733a7741ed271950b9365f3c2c4333d.scope - libcontainer container 318a07c74d042337c431fbaf96bee6a05733a7741ed271950b9365f3c2c4333d. 
Feb 13 20:16:02.558111 containerd[1465]: time="2025-02-13T20:16:02.558062099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8rgw,Uid:a400666a-e400-4691-8bd3-b7a12d5dad68,Namespace:kube-system,Attempt:0,} returns sandbox id \"0394585caa256afb12e566daaa63033c44ad294c49e829814d246c2665f5334e\"" Feb 13 20:16:02.563939 containerd[1465]: time="2025-02-13T20:16:02.563742960Z" level=info msg="CreateContainer within sandbox \"0394585caa256afb12e566daaa63033c44ad294c49e829814d246c2665f5334e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:16:02.588665 containerd[1465]: time="2025-02-13T20:16:02.588569998Z" level=info msg="CreateContainer within sandbox \"0394585caa256afb12e566daaa63033c44ad294c49e829814d246c2665f5334e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"731f5f1c3b49cfcc21598fc4ee7fde5e15e6c97e4c8a883698c02bb9ca05b8a3\"" Feb 13 20:16:02.591452 containerd[1465]: time="2025-02-13T20:16:02.591385993Z" level=info msg="StartContainer for \"731f5f1c3b49cfcc21598fc4ee7fde5e15e6c97e4c8a883698c02bb9ca05b8a3\"" Feb 13 20:16:02.637940 containerd[1465]: time="2025-02-13T20:16:02.637851488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-rfkqk,Uid:8da23e2d-50e2-409a-ae59-8ff314f046a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"318a07c74d042337c431fbaf96bee6a05733a7741ed271950b9365f3c2c4333d\"" Feb 13 20:16:02.643198 containerd[1465]: time="2025-02-13T20:16:02.642270787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:16:02.643601 systemd[1]: Started cri-containerd-731f5f1c3b49cfcc21598fc4ee7fde5e15e6c97e4c8a883698c02bb9ca05b8a3.scope - libcontainer container 731f5f1c3b49cfcc21598fc4ee7fde5e15e6c97e4c8a883698c02bb9ca05b8a3. 
Feb 13 20:16:02.682063 containerd[1465]: time="2025-02-13T20:16:02.682012575Z" level=info msg="StartContainer for \"731f5f1c3b49cfcc21598fc4ee7fde5e15e6c97e4c8a883698c02bb9ca05b8a3\" returns successfully"
Feb 13 20:16:04.462913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487879693.mount: Deactivated successfully.
Feb 13 20:16:05.207176 containerd[1465]: time="2025-02-13T20:16:05.207100101Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:05.208642 containerd[1465]: time="2025-02-13T20:16:05.208556544Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Feb 13 20:16:05.209972 containerd[1465]: time="2025-02-13T20:16:05.209883389Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:05.213872 containerd[1465]: time="2025-02-13T20:16:05.213494690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:05.214716 containerd[1465]: time="2025-02-13T20:16:05.214670878Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.572350826s"
Feb 13 20:16:05.214828 containerd[1465]: time="2025-02-13T20:16:05.214723026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Feb 13 20:16:05.218700 containerd[1465]: time="2025-02-13T20:16:05.218660815Z" level=info msg="CreateContainer within sandbox \"318a07c74d042337c431fbaf96bee6a05733a7741ed271950b9365f3c2c4333d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 20:16:05.234087 containerd[1465]: time="2025-02-13T20:16:05.234029500Z" level=info msg="CreateContainer within sandbox \"318a07c74d042337c431fbaf96bee6a05733a7741ed271950b9365f3c2c4333d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dc79e6444acde15d57920c5fea917ccd0866104b47a4c594823b1ed624fa4420\""
Feb 13 20:16:05.235071 containerd[1465]: time="2025-02-13T20:16:05.235024541Z" level=info msg="StartContainer for \"dc79e6444acde15d57920c5fea917ccd0866104b47a4c594823b1ed624fa4420\""
Feb 13 20:16:05.277361 systemd[1]: Started cri-containerd-dc79e6444acde15d57920c5fea917ccd0866104b47a4c594823b1ed624fa4420.scope - libcontainer container dc79e6444acde15d57920c5fea917ccd0866104b47a4c594823b1ed624fa4420.
Feb 13 20:16:05.316855 containerd[1465]: time="2025-02-13T20:16:05.316778126Z" level=info msg="StartContainer for \"dc79e6444acde15d57920c5fea917ccd0866104b47a4c594823b1ed624fa4420\" returns successfully"
Feb 13 20:16:05.537522 kubelet[2545]: I0213 20:16:05.536702 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8rgw" podStartSLOduration=3.536677042 podStartE2EDuration="3.536677042s" podCreationTimestamp="2025-02-13 20:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:02.889418131 +0000 UTC m=+7.223468338" watchObservedRunningTime="2025-02-13 20:16:05.536677042 +0000 UTC m=+9.870727269"
Feb 13 20:16:07.093341 kubelet[2545]: I0213 20:16:07.092409 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-rfkqk" podStartSLOduration=2.517338226 podStartE2EDuration="5.092384849s" podCreationTimestamp="2025-02-13 20:16:02 +0000 UTC" firstStartedPulling="2025-02-13 20:16:02.641407571 +0000 UTC m=+6.975457766" lastFinishedPulling="2025-02-13 20:16:05.216454179 +0000 UTC m=+9.550504389" observedRunningTime="2025-02-13 20:16:05.907551167 +0000 UTC m=+10.241601377" watchObservedRunningTime="2025-02-13 20:16:07.092384849 +0000 UTC m=+11.426435059"
Feb 13 20:16:07.149272 update_engine[1456]: I20250213 20:16:07.149184 1456 update_attempter.cc:509] Updating boot flags...
Feb 13 20:16:07.215463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2921)
Feb 13 20:16:07.313154 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2922)
Feb 13 20:16:07.461172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2922)
Feb 13 20:16:08.605567 systemd[1]: Created slice kubepods-besteffort-pod1f1a2616_5cff_4a31_b66b_b5aec071871c.slice - libcontainer container kubepods-besteffort-pod1f1a2616_5cff_4a31_b66b_b5aec071871c.slice.
Feb 13 20:16:08.676952 kubelet[2545]: I0213 20:16:08.676672 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f1a2616-5cff-4a31-b66b-b5aec071871c-typha-certs\") pod \"calico-typha-79d6676c6b-9sp5g\" (UID: \"1f1a2616-5cff-4a31-b66b-b5aec071871c\") " pod="calico-system/calico-typha-79d6676c6b-9sp5g"
Feb 13 20:16:08.676952 kubelet[2545]: I0213 20:16:08.676796 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97j56\" (UniqueName: \"kubernetes.io/projected/1f1a2616-5cff-4a31-b66b-b5aec071871c-kube-api-access-97j56\") pod \"calico-typha-79d6676c6b-9sp5g\" (UID: \"1f1a2616-5cff-4a31-b66b-b5aec071871c\") " pod="calico-system/calico-typha-79d6676c6b-9sp5g"
Feb 13 20:16:08.676952 kubelet[2545]: I0213 20:16:08.676860 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f1a2616-5cff-4a31-b66b-b5aec071871c-tigera-ca-bundle\") pod \"calico-typha-79d6676c6b-9sp5g\" (UID: \"1f1a2616-5cff-4a31-b66b-b5aec071871c\") " pod="calico-system/calico-typha-79d6676c6b-9sp5g"
Feb 13 20:16:08.871514 systemd[1]: Created slice kubepods-besteffort-podccc30c1d_827a_43a6_86e1_2eae9ab22004.slice - libcontainer container kubepods-besteffort-podccc30c1d_827a_43a6_86e1_2eae9ab22004.slice.
Feb 13 20:16:08.879150 kubelet[2545]: I0213 20:16:08.879028 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ccc30c1d-827a-43a6-86e1-2eae9ab22004-node-certs\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879150 kubelet[2545]: I0213 20:16:08.879079 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-var-run-calico\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879150 kubelet[2545]: I0213 20:16:08.879108 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-cni-bin-dir\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879414 kubelet[2545]: I0213 20:16:08.879376 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-cni-log-dir\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879482 kubelet[2545]: I0213 20:16:08.879434 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-var-lib-calico\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879482 kubelet[2545]: I0213 20:16:08.879466 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-lib-modules\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879580 kubelet[2545]: I0213 20:16:08.879492 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccc30c1d-827a-43a6-86e1-2eae9ab22004-tigera-ca-bundle\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879580 kubelet[2545]: I0213 20:16:08.879530 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-cni-net-dir\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879580 kubelet[2545]: I0213 20:16:08.879560 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-flexvol-driver-host\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879749 kubelet[2545]: I0213 20:16:08.879593 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-policysync\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879749 kubelet[2545]: I0213 20:16:08.879623 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqhn\" (UniqueName: \"kubernetes.io/projected/ccc30c1d-827a-43a6-86e1-2eae9ab22004-kube-api-access-vlqhn\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.879749 kubelet[2545]: I0213 20:16:08.879652 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccc30c1d-827a-43a6-86e1-2eae9ab22004-xtables-lock\") pod \"calico-node-j4nf5\" (UID: \"ccc30c1d-827a-43a6-86e1-2eae9ab22004\") " pod="calico-system/calico-node-j4nf5"
Feb 13 20:16:08.912071 containerd[1465]: time="2025-02-13T20:16:08.911926777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d6676c6b-9sp5g,Uid:1f1a2616-5cff-4a31-b66b-b5aec071871c,Namespace:calico-system,Attempt:0,}"
Feb 13 20:16:08.960394 containerd[1465]: time="2025-02-13T20:16:08.960180715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:16:08.960870 containerd[1465]: time="2025-02-13T20:16:08.960280655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:16:08.961985 containerd[1465]: time="2025-02-13T20:16:08.961370935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:08.961985 containerd[1465]: time="2025-02-13T20:16:08.961537746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:09.018351 kubelet[2545]: E0213 20:16:09.018280 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.018351 kubelet[2545]: W0213 20:16:09.018314 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.018351 kubelet[2545]: E0213 20:16:09.018346 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.037431 systemd[1]: Started cri-containerd-1c2290409456d7cb3c5ce45aa8b004a46f2dd6ef60ec3014df1e8752dccaaa5a.scope - libcontainer container 1c2290409456d7cb3c5ce45aa8b004a46f2dd6ef60ec3014df1e8752dccaaa5a.
Feb 13 20:16:09.041851 kubelet[2545]: E0213 20:16:09.041736 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.041851 kubelet[2545]: W0213 20:16:09.041791 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.042997 kubelet[2545]: E0213 20:16:09.041816 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.043992 kubelet[2545]: E0213 20:16:09.043790 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a"
Feb 13 20:16:09.076473 kubelet[2545]: E0213 20:16:09.076155 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.076473 kubelet[2545]: W0213 20:16:09.076188 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.076473 kubelet[2545]: E0213 20:16:09.076343 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.079801 kubelet[2545]: E0213 20:16:09.079437 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.079801 kubelet[2545]: W0213 20:16:09.079461 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.079801 kubelet[2545]: E0213 20:16:09.079483 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.080436 kubelet[2545]: E0213 20:16:09.080201 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.080436 kubelet[2545]: W0213 20:16:09.080221 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.080436 kubelet[2545]: E0213 20:16:09.080240 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.081513 kubelet[2545]: E0213 20:16:09.081262 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.081513 kubelet[2545]: W0213 20:16:09.081281 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.081513 kubelet[2545]: E0213 20:16:09.081300 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.082308 kubelet[2545]: E0213 20:16:09.081966 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.082308 kubelet[2545]: W0213 20:16:09.082004 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.082308 kubelet[2545]: E0213 20:16:09.082023 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.083142 kubelet[2545]: E0213 20:16:09.082844 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.083142 kubelet[2545]: W0213 20:16:09.082863 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.083142 kubelet[2545]: E0213 20:16:09.082881 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.085062 kubelet[2545]: E0213 20:16:09.084381 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.085062 kubelet[2545]: W0213 20:16:09.084400 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.085062 kubelet[2545]: E0213 20:16:09.084417 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.086444 kubelet[2545]: E0213 20:16:09.086016 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.086444 kubelet[2545]: W0213 20:16:09.086035 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.086444 kubelet[2545]: E0213 20:16:09.086093 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.088106 kubelet[2545]: E0213 20:16:09.087543 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.088106 kubelet[2545]: W0213 20:16:09.087784 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.088106 kubelet[2545]: E0213 20:16:09.087815 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.089042 kubelet[2545]: E0213 20:16:09.088688 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.089042 kubelet[2545]: W0213 20:16:09.088727 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.089042 kubelet[2545]: E0213 20:16:09.088745 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.089902 kubelet[2545]: E0213 20:16:09.089635 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.089902 kubelet[2545]: W0213 20:16:09.089656 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.089902 kubelet[2545]: E0213 20:16:09.089675 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.090717 kubelet[2545]: E0213 20:16:09.090438 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.090717 kubelet[2545]: W0213 20:16:09.090458 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.090717 kubelet[2545]: E0213 20:16:09.090475 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.091339 kubelet[2545]: E0213 20:16:09.091088 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.091339 kubelet[2545]: W0213 20:16:09.091168 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.091339 kubelet[2545]: E0213 20:16:09.091190 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.092057 kubelet[2545]: E0213 20:16:09.091882 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.092057 kubelet[2545]: W0213 20:16:09.091901 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.092057 kubelet[2545]: E0213 20:16:09.091918 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.092841 kubelet[2545]: E0213 20:16:09.092661 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.092841 kubelet[2545]: W0213 20:16:09.092683 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.092841 kubelet[2545]: E0213 20:16:09.092702 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.094774 kubelet[2545]: E0213 20:16:09.094638 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.094774 kubelet[2545]: W0213 20:16:09.094658 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.094774 kubelet[2545]: E0213 20:16:09.094677 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.095475 kubelet[2545]: E0213 20:16:09.095316 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.095475 kubelet[2545]: W0213 20:16:09.095335 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.095475 kubelet[2545]: E0213 20:16:09.095354 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.096051 kubelet[2545]: E0213 20:16:09.095899 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.096051 kubelet[2545]: W0213 20:16:09.095918 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.096051 kubelet[2545]: E0213 20:16:09.095936 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.096805 kubelet[2545]: E0213 20:16:09.096634 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.096805 kubelet[2545]: W0213 20:16:09.096655 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.096805 kubelet[2545]: E0213 20:16:09.096673 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.097544 kubelet[2545]: E0213 20:16:09.097236 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.097544 kubelet[2545]: W0213 20:16:09.097254 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.097544 kubelet[2545]: E0213 20:16:09.097272 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.097987 kubelet[2545]: E0213 20:16:09.097968 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.098171 kubelet[2545]: W0213 20:16:09.098092 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.098171 kubelet[2545]: E0213 20:16:09.098142 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.098524 kubelet[2545]: I0213 20:16:09.098347 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55870946-44e5-4646-b49c-964c3d25ad4a-kubelet-dir\") pod \"csi-node-driver-vtgvf\" (UID: \"55870946-44e5-4646-b49c-964c3d25ad4a\") " pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:09.099215 kubelet[2545]: E0213 20:16:09.098973 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.099215 kubelet[2545]: W0213 20:16:09.098993 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.099215 kubelet[2545]: E0213 20:16:09.099019 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.100483 kubelet[2545]: E0213 20:16:09.100239 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.100483 kubelet[2545]: W0213 20:16:09.100257 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.100954 kubelet[2545]: E0213 20:16:09.100696 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.101214 kubelet[2545]: E0213 20:16:09.101193 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.101583 kubelet[2545]: W0213 20:16:09.101354 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.101583 kubelet[2545]: E0213 20:16:09.101382 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.101583 kubelet[2545]: I0213 20:16:09.101417 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55870946-44e5-4646-b49c-964c3d25ad4a-socket-dir\") pod \"csi-node-driver-vtgvf\" (UID: \"55870946-44e5-4646-b49c-964c3d25ad4a\") " pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:09.103024 kubelet[2545]: E0213 20:16:09.102991 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.103212 kubelet[2545]: W0213 20:16:09.103190 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.103567 kubelet[2545]: E0213 20:16:09.103542 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.103776 kubelet[2545]: I0213 20:16:09.103741 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55870946-44e5-4646-b49c-964c3d25ad4a-registration-dir\") pod \"csi-node-driver-vtgvf\" (UID: \"55870946-44e5-4646-b49c-964c3d25ad4a\") " pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:09.104256 kubelet[2545]: E0213 20:16:09.104177 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.104256 kubelet[2545]: W0213 20:16:09.104195 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.104256 kubelet[2545]: E0213 20:16:09.104219 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.105174 kubelet[2545]: E0213 20:16:09.105056 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.105174 kubelet[2545]: W0213 20:16:09.105076 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.105473 kubelet[2545]: E0213 20:16:09.105348 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.106002 kubelet[2545]: E0213 20:16:09.105970 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.106424 kubelet[2545]: W0213 20:16:09.106139 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.106424 kubelet[2545]: E0213 20:16:09.106213 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.106424 kubelet[2545]: I0213 20:16:09.106247 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4kj\" (UniqueName: \"kubernetes.io/projected/55870946-44e5-4646-b49c-964c3d25ad4a-kube-api-access-7p4kj\") pod \"csi-node-driver-vtgvf\" (UID: \"55870946-44e5-4646-b49c-964c3d25ad4a\") " pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:09.107027 kubelet[2545]: E0213 20:16:09.106972 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.107027 kubelet[2545]: W0213 20:16:09.106992 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.108827 kubelet[2545]: E0213 20:16:09.107502 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.108827 kubelet[2545]: I0213 20:16:09.108759 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/55870946-44e5-4646-b49c-964c3d25ad4a-varrun\") pod \"csi-node-driver-vtgvf\" (UID: \"55870946-44e5-4646-b49c-964c3d25ad4a\") " pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:09.109082 kubelet[2545]: E0213 20:16:09.109065 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.109234 kubelet[2545]: W0213 20:16:09.109215 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.109455 kubelet[2545]: E0213 20:16:09.109435 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.110209 kubelet[2545]: E0213 20:16:09.110189 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.110440 kubelet[2545]: W0213 20:16:09.110373 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.110869 kubelet[2545]: E0213 20:16:09.110646 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.111338 kubelet[2545]: E0213 20:16:09.111092 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.111338 kubelet[2545]: W0213 20:16:09.111111 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.112905 kubelet[2545]: E0213 20:16:09.111467 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:16:09.113311 kubelet[2545]: E0213 20:16:09.113081 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:16:09.113311 kubelet[2545]: W0213 20:16:09.113101 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:16:09.113311 kubelet[2545]: E0213 20:16:09.113159 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 20:16:09.114041 kubelet[2545]: E0213 20:16:09.113954 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.114041 kubelet[2545]: W0213 20:16:09.113997 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.114041 kubelet[2545]: E0213 20:16:09.114017 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.115057 kubelet[2545]: E0213 20:16:09.114829 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.115057 kubelet[2545]: W0213 20:16:09.114848 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.115057 kubelet[2545]: E0213 20:16:09.114866 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.182572 containerd[1465]: time="2025-02-13T20:16:09.182512007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4nf5,Uid:ccc30c1d-827a-43a6-86e1-2eae9ab22004,Namespace:calico-system,Attempt:0,}" Feb 13 20:16:09.196733 containerd[1465]: time="2025-02-13T20:16:09.196551063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d6676c6b-9sp5g,Uid:1f1a2616-5cff-4a31-b66b-b5aec071871c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c2290409456d7cb3c5ce45aa8b004a46f2dd6ef60ec3014df1e8752dccaaa5a\"" Feb 13 20:16:09.202154 containerd[1465]: time="2025-02-13T20:16:09.201608987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:16:09.210768 kubelet[2545]: E0213 20:16:09.210646 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.210768 kubelet[2545]: W0213 20:16:09.210677 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.210768 kubelet[2545]: E0213 20:16:09.210729 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.213429 kubelet[2545]: E0213 20:16:09.213247 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.213429 kubelet[2545]: W0213 20:16:09.213272 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.213429 kubelet[2545]: E0213 20:16:09.213298 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.214013 kubelet[2545]: E0213 20:16:09.213634 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.214013 kubelet[2545]: W0213 20:16:09.213661 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.214013 kubelet[2545]: E0213 20:16:09.213681 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.215327 kubelet[2545]: E0213 20:16:09.215169 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.215327 kubelet[2545]: W0213 20:16:09.215194 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.215327 kubelet[2545]: E0213 20:16:09.215215 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.218005 kubelet[2545]: E0213 20:16:09.217099 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.218005 kubelet[2545]: W0213 20:16:09.217147 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.218005 kubelet[2545]: E0213 20:16:09.217175 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.219748 kubelet[2545]: E0213 20:16:09.219614 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.219748 kubelet[2545]: W0213 20:16:09.219636 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.219748 kubelet[2545]: E0213 20:16:09.219695 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.221170 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.222321 kubelet[2545]: W0213 20:16:09.221193 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.221221 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.221694 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.222321 kubelet[2545]: W0213 20:16:09.221710 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.221728 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.222255 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.222321 kubelet[2545]: W0213 20:16:09.222298 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.222321 kubelet[2545]: E0213 20:16:09.222318 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.223292 kubelet[2545]: E0213 20:16:09.223033 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.223292 kubelet[2545]: W0213 20:16:09.223051 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.223292 kubelet[2545]: E0213 20:16:09.223167 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.223747 kubelet[2545]: E0213 20:16:09.223725 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.223927 kubelet[2545]: W0213 20:16:09.223788 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.223927 kubelet[2545]: E0213 20:16:09.223878 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.224685 kubelet[2545]: E0213 20:16:09.224664 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.224685 kubelet[2545]: W0213 20:16:09.224688 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.224685 kubelet[2545]: E0213 20:16:09.224751 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.225787 kubelet[2545]: E0213 20:16:09.225703 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.225898 kubelet[2545]: W0213 20:16:09.225796 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.225898 kubelet[2545]: E0213 20:16:09.225874 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.226611 kubelet[2545]: E0213 20:16:09.226587 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.226611 kubelet[2545]: W0213 20:16:09.226611 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.227095 kubelet[2545]: E0213 20:16:09.226753 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.227539 kubelet[2545]: E0213 20:16:09.227365 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.227539 kubelet[2545]: W0213 20:16:09.227381 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.227539 kubelet[2545]: E0213 20:16:09.227531 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.228626 kubelet[2545]: E0213 20:16:09.228090 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.228626 kubelet[2545]: W0213 20:16:09.228173 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.228626 kubelet[2545]: E0213 20:16:09.228317 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.229719 kubelet[2545]: E0213 20:16:09.228769 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.229719 kubelet[2545]: W0213 20:16:09.228786 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.229719 kubelet[2545]: E0213 20:16:09.228918 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.229719 kubelet[2545]: E0213 20:16:09.229316 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.229719 kubelet[2545]: W0213 20:16:09.229331 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.229719 kubelet[2545]: E0213 20:16:09.229459 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.229719 kubelet[2545]: E0213 20:16:09.229690 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.229719 kubelet[2545]: W0213 20:16:09.229704 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.229825 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.230193 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.231532 kubelet[2545]: W0213 20:16:09.230209 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.230274 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.230689 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.231532 kubelet[2545]: W0213 20:16:09.230716 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.230738 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.231450 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.231532 kubelet[2545]: W0213 20:16:09.231466 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.231532 kubelet[2545]: E0213 20:16:09.231490 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.232048 kubelet[2545]: E0213 20:16:09.231931 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.232048 kubelet[2545]: W0213 20:16:09.231945 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.235402 kubelet[2545]: E0213 20:16:09.232995 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.235402 kubelet[2545]: E0213 20:16:09.233330 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.235402 kubelet[2545]: W0213 20:16:09.233357 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.235402 kubelet[2545]: E0213 20:16:09.233375 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.235402 kubelet[2545]: E0213 20:16:09.233726 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.235402 kubelet[2545]: W0213 20:16:09.233741 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.235402 kubelet[2545]: E0213 20:16:09.233758 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:09.259084 kubelet[2545]: E0213 20:16:09.259050 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:09.259350 kubelet[2545]: W0213 20:16:09.259322 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:09.259507 kubelet[2545]: E0213 20:16:09.259487 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:09.266766 containerd[1465]: time="2025-02-13T20:16:09.263046831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:09.266766 containerd[1465]: time="2025-02-13T20:16:09.263144396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:09.266766 containerd[1465]: time="2025-02-13T20:16:09.263173354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:09.266766 containerd[1465]: time="2025-02-13T20:16:09.263305341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:09.301112 systemd[1]: Started cri-containerd-167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c.scope - libcontainer container 167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c. 
Feb 13 20:16:09.354169 containerd[1465]: time="2025-02-13T20:16:09.354026143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4nf5,Uid:ccc30c1d-827a-43a6-86e1-2eae9ab22004,Namespace:calico-system,Attempt:0,} returns sandbox id \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\"" Feb 13 20:16:10.389254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460996249.mount: Deactivated successfully. Feb 13 20:16:10.789992 kubelet[2545]: E0213 20:16:10.789913 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a" Feb 13 20:16:11.240696 containerd[1465]: time="2025-02-13T20:16:11.240630971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:11.241943 containerd[1465]: time="2025-02-13T20:16:11.241868967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:16:11.243261 containerd[1465]: time="2025-02-13T20:16:11.243189641Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:11.246004 containerd[1465]: time="2025-02-13T20:16:11.245942826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:11.247183 containerd[1465]: time="2025-02-13T20:16:11.246912981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.045254307s" Feb 13 20:16:11.247183 containerd[1465]: time="2025-02-13T20:16:11.246958091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:16:11.249416 containerd[1465]: time="2025-02-13T20:16:11.249193133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:16:11.268623 containerd[1465]: time="2025-02-13T20:16:11.268427194Z" level=info msg="CreateContainer within sandbox \"1c2290409456d7cb3c5ce45aa8b004a46f2dd6ef60ec3014df1e8752dccaaa5a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:16:11.285294 containerd[1465]: time="2025-02-13T20:16:11.285241206Z" level=info msg="CreateContainer within sandbox \"1c2290409456d7cb3c5ce45aa8b004a46f2dd6ef60ec3014df1e8752dccaaa5a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dee4e368b618cf15ef7a1e37e313a8bed5d347cdbe4255dc741ac2c7bc6c5eaf\"" Feb 13 20:16:11.286397 containerd[1465]: time="2025-02-13T20:16:11.286159486Z" level=info msg="StartContainer for \"dee4e368b618cf15ef7a1e37e313a8bed5d347cdbe4255dc741ac2c7bc6c5eaf\"" Feb 13 20:16:11.336519 systemd[1]: Started cri-containerd-dee4e368b618cf15ef7a1e37e313a8bed5d347cdbe4255dc741ac2c7bc6c5eaf.scope - libcontainer container dee4e368b618cf15ef7a1e37e313a8bed5d347cdbe4255dc741ac2c7bc6c5eaf. 
Feb 13 20:16:11.396010 containerd[1465]: time="2025-02-13T20:16:11.395830645Z" level=info msg="StartContainer for \"dee4e368b618cf15ef7a1e37e313a8bed5d347cdbe4255dc741ac2c7bc6c5eaf\" returns successfully" Feb 13 20:16:11.915683 kubelet[2545]: E0213 20:16:11.915556 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.915683 kubelet[2545]: W0213 20:16:11.915613 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.917710 kubelet[2545]: E0213 20:16:11.915641 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.917710 kubelet[2545]: E0213 20:16:11.916941 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.917710 kubelet[2545]: W0213 20:16:11.916958 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.917710 kubelet[2545]: E0213 20:16:11.917097 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.918518 kubelet[2545]: E0213 20:16:11.918219 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.918518 kubelet[2545]: W0213 20:16:11.918269 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.918518 kubelet[2545]: E0213 20:16:11.918289 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.919228 kubelet[2545]: E0213 20:16:11.919010 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.919228 kubelet[2545]: W0213 20:16:11.919046 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.919228 kubelet[2545]: E0213 20:16:11.919066 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.919895 kubelet[2545]: E0213 20:16:11.919750 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.919895 kubelet[2545]: W0213 20:16:11.919768 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.919895 kubelet[2545]: E0213 20:16:11.919786 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.920620 kubelet[2545]: E0213 20:16:11.920485 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.920620 kubelet[2545]: W0213 20:16:11.920502 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.920620 kubelet[2545]: E0213 20:16:11.920544 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.921200 kubelet[2545]: E0213 20:16:11.921173 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.921200 kubelet[2545]: W0213 20:16:11.921194 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.921355 kubelet[2545]: E0213 20:16:11.921212 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.921607 kubelet[2545]: E0213 20:16:11.921585 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.921607 kubelet[2545]: W0213 20:16:11.921604 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.921781 kubelet[2545]: E0213 20:16:11.921625 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.921969 kubelet[2545]: E0213 20:16:11.921949 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.921969 kubelet[2545]: W0213 20:16:11.921967 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.922113 kubelet[2545]: E0213 20:16:11.921984 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.922345 kubelet[2545]: E0213 20:16:11.922322 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.922345 kubelet[2545]: W0213 20:16:11.922341 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.922462 kubelet[2545]: E0213 20:16:11.922358 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.922702 kubelet[2545]: E0213 20:16:11.922680 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.922702 kubelet[2545]: W0213 20:16:11.922699 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.922862 kubelet[2545]: E0213 20:16:11.922716 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.923013 kubelet[2545]: E0213 20:16:11.922995 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.923013 kubelet[2545]: W0213 20:16:11.923012 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.923191 kubelet[2545]: E0213 20:16:11.923028 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.923349 kubelet[2545]: E0213 20:16:11.923331 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.923349 kubelet[2545]: W0213 20:16:11.923347 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.923550 kubelet[2545]: E0213 20:16:11.923363 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.924093 kubelet[2545]: E0213 20:16:11.924056 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.924093 kubelet[2545]: W0213 20:16:11.924081 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.924271 kubelet[2545]: E0213 20:16:11.924098 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.924778 kubelet[2545]: E0213 20:16:11.924748 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.924778 kubelet[2545]: W0213 20:16:11.924774 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.924924 kubelet[2545]: E0213 20:16:11.924793 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.938899 kubelet[2545]: E0213 20:16:11.938789 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.938899 kubelet[2545]: W0213 20:16:11.938815 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.938899 kubelet[2545]: E0213 20:16:11.938837 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.939256 kubelet[2545]: E0213 20:16:11.939205 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.939256 kubelet[2545]: W0213 20:16:11.939219 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.939256 kubelet[2545]: E0213 20:16:11.939242 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.939611 kubelet[2545]: E0213 20:16:11.939590 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.939611 kubelet[2545]: W0213 20:16:11.939608 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.939795 kubelet[2545]: E0213 20:16:11.939630 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.939976 kubelet[2545]: E0213 20:16:11.939956 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.939976 kubelet[2545]: W0213 20:16:11.939973 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.940111 kubelet[2545]: E0213 20:16:11.940001 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.940352 kubelet[2545]: E0213 20:16:11.940331 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.940352 kubelet[2545]: W0213 20:16:11.940348 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.940526 kubelet[2545]: E0213 20:16:11.940375 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.940668 kubelet[2545]: E0213 20:16:11.940649 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.940801 kubelet[2545]: W0213 20:16:11.940666 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.940801 kubelet[2545]: E0213 20:16:11.940746 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.940966 kubelet[2545]: E0213 20:16:11.940944 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.940966 kubelet[2545]: W0213 20:16:11.940961 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.941162 kubelet[2545]: E0213 20:16:11.941036 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.941290 kubelet[2545]: E0213 20:16:11.941276 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.941290 kubelet[2545]: W0213 20:16:11.941288 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.941473 kubelet[2545]: E0213 20:16:11.941319 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.941613 kubelet[2545]: E0213 20:16:11.941595 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.941613 kubelet[2545]: W0213 20:16:11.941612 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.941723 kubelet[2545]: E0213 20:16:11.941634 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.942103 kubelet[2545]: E0213 20:16:11.942080 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.942103 kubelet[2545]: W0213 20:16:11.942099 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.942288 kubelet[2545]: E0213 20:16:11.942149 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.942470 kubelet[2545]: E0213 20:16:11.942442 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.942470 kubelet[2545]: W0213 20:16:11.942460 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.942666 kubelet[2545]: E0213 20:16:11.942541 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.942772 kubelet[2545]: E0213 20:16:11.942748 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.942772 kubelet[2545]: W0213 20:16:11.942761 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.942949 kubelet[2545]: E0213 20:16:11.942800 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.943057 kubelet[2545]: E0213 20:16:11.943035 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.943057 kubelet[2545]: W0213 20:16:11.943050 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.943245 kubelet[2545]: E0213 20:16:11.943075 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.943487 kubelet[2545]: E0213 20:16:11.943465 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.943487 kubelet[2545]: W0213 20:16:11.943484 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.943614 kubelet[2545]: E0213 20:16:11.943508 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.944004 kubelet[2545]: E0213 20:16:11.943983 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.944004 kubelet[2545]: W0213 20:16:11.944001 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.944177 kubelet[2545]: E0213 20:16:11.944036 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.944452 kubelet[2545]: E0213 20:16:11.944431 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.944452 kubelet[2545]: W0213 20:16:11.944450 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.944600 kubelet[2545]: E0213 20:16:11.944474 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:11.945259 kubelet[2545]: E0213 20:16:11.945059 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.945259 kubelet[2545]: W0213 20:16:11.945076 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.945259 kubelet[2545]: E0213 20:16:11.945105 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:16:11.945729 kubelet[2545]: E0213 20:16:11.945698 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:16:11.945729 kubelet[2545]: W0213 20:16:11.945712 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:16:11.945729 kubelet[2545]: E0213 20:16:11.945728 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:16:12.369160 containerd[1465]: time="2025-02-13T20:16:12.369084741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.370276 containerd[1465]: time="2025-02-13T20:16:12.370203407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:16:12.371303 containerd[1465]: time="2025-02-13T20:16:12.371235207Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.373984 containerd[1465]: time="2025-02-13T20:16:12.373922275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.375014 containerd[1465]: time="2025-02-13T20:16:12.374968993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.125737817s" Feb 13 20:16:12.375151 containerd[1465]: time="2025-02-13T20:16:12.375019872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:16:12.378106 containerd[1465]: time="2025-02-13T20:16:12.378011601Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:16:12.395486 containerd[1465]: time="2025-02-13T20:16:12.395433121Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26\"" Feb 13 20:16:12.398987 containerd[1465]: time="2025-02-13T20:16:12.398825259Z" level=info msg="StartContainer for \"919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26\"" Feb 13 20:16:12.480349 systemd[1]: Started cri-containerd-919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26.scope - libcontainer container 919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26. Feb 13 20:16:12.524569 containerd[1465]: time="2025-02-13T20:16:12.524503095Z" level=info msg="StartContainer for \"919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26\" returns successfully" Feb 13 20:16:12.544234 systemd[1]: cri-containerd-919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26.scope: Deactivated successfully. 
Feb 13 20:16:12.581360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26-rootfs.mount: Deactivated successfully. Feb 13 20:16:12.789993 kubelet[2545]: E0213 20:16:12.788772 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a" Feb 13 20:16:12.915015 kubelet[2545]: I0213 20:16:12.914523 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:12.932959 kubelet[2545]: I0213 20:16:12.932883 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79d6676c6b-9sp5g" podStartSLOduration=2.885226768 podStartE2EDuration="4.932856056s" podCreationTimestamp="2025-02-13 20:16:08 +0000 UTC" firstStartedPulling="2025-02-13 20:16:09.200588771 +0000 UTC m=+13.534638965" lastFinishedPulling="2025-02-13 20:16:11.248218054 +0000 UTC m=+15.582268253" observedRunningTime="2025-02-13 20:16:11.927069169 +0000 UTC m=+16.261119379" watchObservedRunningTime="2025-02-13 20:16:12.932856056 +0000 UTC m=+17.266906275" Feb 13 20:16:13.250906 containerd[1465]: time="2025-02-13T20:16:13.250815123Z" level=info msg="shim disconnected" id=919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26 namespace=k8s.io Feb 13 20:16:13.250906 containerd[1465]: time="2025-02-13T20:16:13.250907340Z" level=warning msg="cleaning up after shim disconnected" id=919dcc57dbbbe6b7aabb3263568b0497d6f4a8c750145e492dea8a7e92133e26 namespace=k8s.io Feb 13 20:16:13.250906 containerd[1465]: time="2025-02-13T20:16:13.250920955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:16:13.920426 containerd[1465]: time="2025-02-13T20:16:13.920063305Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:16:14.788712 kubelet[2545]: E0213 20:16:14.788655 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a" Feb 13 20:16:16.788932 kubelet[2545]: E0213 20:16:16.788878 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a" Feb 13 20:16:18.197428 containerd[1465]: time="2025-02-13T20:16:18.197362367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:18.198659 containerd[1465]: time="2025-02-13T20:16:18.198587100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:16:18.200026 containerd[1465]: time="2025-02-13T20:16:18.199951298Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:18.202871 containerd[1465]: time="2025-02-13T20:16:18.202796498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:18.203944 containerd[1465]: time="2025-02-13T20:16:18.203791347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.28367567s" Feb 13 20:16:18.203944 containerd[1465]: time="2025-02-13T20:16:18.203838237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:16:18.207715 containerd[1465]: time="2025-02-13T20:16:18.207618718Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:16:18.225782 containerd[1465]: time="2025-02-13T20:16:18.225724602Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d\"" Feb 13 20:16:18.228167 containerd[1465]: time="2025-02-13T20:16:18.226561229Z" level=info msg="StartContainer for \"50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d\"" Feb 13 20:16:18.274354 systemd[1]: Started cri-containerd-50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d.scope - libcontainer container 50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d. 
Feb 13 20:16:18.315823 containerd[1465]: time="2025-02-13T20:16:18.315752795Z" level=info msg="StartContainer for \"50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d\" returns successfully" Feb 13 20:16:18.789780 kubelet[2545]: E0213 20:16:18.789701 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a" Feb 13 20:16:19.164194 systemd[1]: cri-containerd-50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d.scope: Deactivated successfully. Feb 13 20:16:19.197635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d-rootfs.mount: Deactivated successfully. Feb 13 20:16:19.244084 kubelet[2545]: I0213 20:16:19.242336 2545 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:16:19.298043 kubelet[2545]: I0213 20:16:19.297743 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c63b30a8-8e62-4267-9609-912d1a8617c5-calico-apiserver-certs\") pod \"calico-apiserver-645b8d968-55plt\" (UID: \"c63b30a8-8e62-4267-9609-912d1a8617c5\") " pod="calico-apiserver/calico-apiserver-645b8d968-55plt" Feb 13 20:16:19.298043 kubelet[2545]: I0213 20:16:19.297793 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpx8\" (UniqueName: \"kubernetes.io/projected/061fba9c-316a-4909-a848-0cb5a7c86a19-kube-api-access-ndpx8\") pod \"calico-apiserver-645b8d968-sl4b8\" (UID: \"061fba9c-316a-4909-a848-0cb5a7c86a19\") " pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8" Feb 13 20:16:19.298043 kubelet[2545]: I0213 20:16:19.297942 
2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkfz\" (UniqueName: \"kubernetes.io/projected/ab60a360-887a-466f-9f36-830c771a9b75-kube-api-access-bpkfz\") pod \"coredns-6f6b679f8f-v9hwk\" (UID: \"ab60a360-887a-466f-9f36-830c771a9b75\") " pod="kube-system/coredns-6f6b679f8f-v9hwk" Feb 13 20:16:19.298043 kubelet[2545]: I0213 20:16:19.297996 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/061fba9c-316a-4909-a848-0cb5a7c86a19-calico-apiserver-certs\") pod \"calico-apiserver-645b8d968-sl4b8\" (UID: \"061fba9c-316a-4909-a848-0cb5a7c86a19\") " pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8" Feb 13 20:16:19.298043 kubelet[2545]: I0213 20:16:19.298031 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab60a360-887a-466f-9f36-830c771a9b75-config-volume\") pod \"coredns-6f6b679f8f-v9hwk\" (UID: \"ab60a360-887a-466f-9f36-830c771a9b75\") " pod="kube-system/coredns-6f6b679f8f-v9hwk" Feb 13 20:16:19.298493 kubelet[2545]: I0213 20:16:19.298155 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kj92\" (UniqueName: \"kubernetes.io/projected/c63b30a8-8e62-4267-9609-912d1a8617c5-kube-api-access-6kj92\") pod \"calico-apiserver-645b8d968-55plt\" (UID: \"c63b30a8-8e62-4267-9609-912d1a8617c5\") " pod="calico-apiserver/calico-apiserver-645b8d968-55plt" Feb 13 20:16:19.307475 systemd[1]: Created slice kubepods-burstable-podab60a360_887a_466f_9f36_830c771a9b75.slice - libcontainer container kubepods-burstable-podab60a360_887a_466f_9f36_830c771a9b75.slice. 
Feb 13 20:16:19.329488 systemd[1]: Created slice kubepods-besteffort-podc63b30a8_8e62_4267_9609_912d1a8617c5.slice - libcontainer container kubepods-besteffort-podc63b30a8_8e62_4267_9609_912d1a8617c5.slice.
Feb 13 20:16:19.341805 systemd[1]: Created slice kubepods-besteffort-pod061fba9c_316a_4909_a848_0cb5a7c86a19.slice - libcontainer container kubepods-besteffort-pod061fba9c_316a_4909_a848_0cb5a7c86a19.slice.
Feb 13 20:16:19.357428 systemd[1]: Created slice kubepods-burstable-podbf8b9894_04eb_4f05_8268_01b34a155c39.slice - libcontainer container kubepods-burstable-podbf8b9894_04eb_4f05_8268_01b34a155c39.slice.
Feb 13 20:16:19.369294 systemd[1]: Created slice kubepods-besteffort-podc79cadd8_8457_48ba_9385_1ff5bfefcfc8.slice - libcontainer container kubepods-besteffort-podc79cadd8_8457_48ba_9385_1ff5bfefcfc8.slice.
Feb 13 20:16:19.402384 kubelet[2545]: I0213 20:16:19.399207 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8b9894-04eb-4f05-8268-01b34a155c39-config-volume\") pod \"coredns-6f6b679f8f-pbzwk\" (UID: \"bf8b9894-04eb-4f05-8268-01b34a155c39\") " pod="kube-system/coredns-6f6b679f8f-pbzwk"
Feb 13 20:16:19.409109 kubelet[2545]: I0213 20:16:19.399359 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4qn\" (UniqueName: \"kubernetes.io/projected/c79cadd8-8457-48ba-9385-1ff5bfefcfc8-kube-api-access-xj4qn\") pod \"calico-kube-controllers-658869675d-mqtbl\" (UID: \"c79cadd8-8457-48ba-9385-1ff5bfefcfc8\") " pod="calico-system/calico-kube-controllers-658869675d-mqtbl"
Feb 13 20:16:19.409297 kubelet[2545]: I0213 20:16:19.409194 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79cadd8-8457-48ba-9385-1ff5bfefcfc8-tigera-ca-bundle\") pod \"calico-kube-controllers-658869675d-mqtbl\" (UID: \"c79cadd8-8457-48ba-9385-1ff5bfefcfc8\") " pod="calico-system/calico-kube-controllers-658869675d-mqtbl"
Feb 13 20:16:19.409369 kubelet[2545]: I0213 20:16:19.409292 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfws\" (UniqueName: \"kubernetes.io/projected/bf8b9894-04eb-4f05-8268-01b34a155c39-kube-api-access-zxfws\") pod \"coredns-6f6b679f8f-pbzwk\" (UID: \"bf8b9894-04eb-4f05-8268-01b34a155c39\") " pod="kube-system/coredns-6f6b679f8f-pbzwk"
Feb 13 20:16:19.620577 containerd[1465]: time="2025-02-13T20:16:19.620517315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hwk,Uid:ab60a360-887a-466f-9f36-830c771a9b75,Namespace:kube-system,Attempt:0,}"
Feb 13 20:16:19.641750 containerd[1465]: time="2025-02-13T20:16:19.641693567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-55plt,Uid:c63b30a8-8e62-4267-9609-912d1a8617c5,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:16:19.713551 containerd[1465]: time="2025-02-13T20:16:19.712975713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-sl4b8,Uid:061fba9c-316a-4909-a848-0cb5a7c86a19,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:16:19.722601 containerd[1465]: time="2025-02-13T20:16:19.722558368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pbzwk,Uid:bf8b9894-04eb-4f05-8268-01b34a155c39,Namespace:kube-system,Attempt:0,}"
Feb 13 20:16:19.722884 containerd[1465]: time="2025-02-13T20:16:19.722566059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658869675d-mqtbl,Uid:c79cadd8-8457-48ba-9385-1ff5bfefcfc8,Namespace:calico-system,Attempt:0,}"
Feb 13 20:16:19.970534 containerd[1465]: time="2025-02-13T20:16:19.970177428Z" level=info msg="shim disconnected" id=50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d namespace=k8s.io
Feb 13 20:16:19.970534 containerd[1465]: time="2025-02-13T20:16:19.970246379Z" level=warning msg="cleaning up after shim disconnected" id=50887f3b1dbab4ee9fb5f50f2fb832dc96a69a296c41b3ac0b270d1aac0d999d namespace=k8s.io
Feb 13 20:16:19.970534 containerd[1465]: time="2025-02-13T20:16:19.970272000Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:16:20.220438 containerd[1465]: time="2025-02-13T20:16:20.220362780Z" level=error msg="Failed to destroy network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.222149 containerd[1465]: time="2025-02-13T20:16:20.220978597Z" level=error msg="encountered an error cleaning up failed sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.222149 containerd[1465]: time="2025-02-13T20:16:20.221088471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pbzwk,Uid:bf8b9894-04eb-4f05-8268-01b34a155c39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.222347 kubelet[2545]: E0213 20:16:20.221454 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.222347 kubelet[2545]: E0213 20:16:20.221559 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pbzwk"
Feb 13 20:16:20.222347 kubelet[2545]: E0213 20:16:20.221591 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pbzwk"
Feb 13 20:16:20.222347 kubelet[2545]: E0213 20:16:20.221663 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pbzwk_kube-system(bf8b9894-04eb-4f05-8268-01b34a155c39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pbzwk_kube-system(bf8b9894-04eb-4f05-8268-01b34a155c39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pbzwk" podUID="bf8b9894-04eb-4f05-8268-01b34a155c39"
Feb 13 20:16:20.246809 containerd[1465]: time="2025-02-13T20:16:20.245293990Z" level=error msg="Failed to destroy network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.247445 containerd[1465]: time="2025-02-13T20:16:20.247372575Z" level=error msg="encountered an error cleaning up failed sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.247757 containerd[1465]: time="2025-02-13T20:16:20.247687916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-sl4b8,Uid:061fba9c-316a-4909-a848-0cb5a7c86a19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.248430 kubelet[2545]: E0213 20:16:20.248225 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.248430 kubelet[2545]: E0213 20:16:20.248306 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8"
Feb 13 20:16:20.248430 kubelet[2545]: E0213 20:16:20.248339 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8"
Feb 13 20:16:20.248430 kubelet[2545]: E0213 20:16:20.248403 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-645b8d968-sl4b8_calico-apiserver(061fba9c-316a-4909-a848-0cb5a7c86a19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-645b8d968-sl4b8_calico-apiserver(061fba9c-316a-4909-a848-0cb5a7c86a19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8" podUID="061fba9c-316a-4909-a848-0cb5a7c86a19"
Feb 13 20:16:20.257753 containerd[1465]: time="2025-02-13T20:16:20.257704246Z" level=error msg="Failed to destroy network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.258628 containerd[1465]: time="2025-02-13T20:16:20.258488367Z" level=error msg="encountered an error cleaning up failed sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.258628 containerd[1465]: time="2025-02-13T20:16:20.258574347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hwk,Uid:ab60a360-887a-466f-9f36-830c771a9b75,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.260489 kubelet[2545]: E0213 20:16:20.258844 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.260489 kubelet[2545]: E0213 20:16:20.258908 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9hwk"
Feb 13 20:16:20.260489 kubelet[2545]: E0213 20:16:20.258939 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9hwk"
Feb 13 20:16:20.260489 kubelet[2545]: E0213 20:16:20.258992 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9hwk_kube-system(ab60a360-887a-466f-9f36-830c771a9b75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9hwk_kube-system(ab60a360-887a-466f-9f36-830c771a9b75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9hwk" podUID="ab60a360-887a-466f-9f36-830c771a9b75"
Feb 13 20:16:20.262000 containerd[1465]: time="2025-02-13T20:16:20.261728707Z" level=error msg="Failed to destroy network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.262404 containerd[1465]: time="2025-02-13T20:16:20.262241819Z" level=error msg="encountered an error cleaning up failed sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.262404 containerd[1465]: time="2025-02-13T20:16:20.262312314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658869675d-mqtbl,Uid:c79cadd8-8457-48ba-9385-1ff5bfefcfc8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.262928 kubelet[2545]: E0213 20:16:20.262707 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.262928 kubelet[2545]: E0213 20:16:20.262767 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658869675d-mqtbl"
Feb 13 20:16:20.262928 kubelet[2545]: E0213 20:16:20.262799 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658869675d-mqtbl"
Feb 13 20:16:20.262928 kubelet[2545]: E0213 20:16:20.262851 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658869675d-mqtbl_calico-system(c79cadd8-8457-48ba-9385-1ff5bfefcfc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658869675d-mqtbl_calico-system(c79cadd8-8457-48ba-9385-1ff5bfefcfc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658869675d-mqtbl" podUID="c79cadd8-8457-48ba-9385-1ff5bfefcfc8"
Feb 13 20:16:20.274864 containerd[1465]: time="2025-02-13T20:16:20.274821719Z" level=error msg="Failed to destroy network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.275284 containerd[1465]: time="2025-02-13T20:16:20.275230023Z" level=error msg="encountered an error cleaning up failed sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.275520 containerd[1465]: time="2025-02-13T20:16:20.275301360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-55plt,Uid:c63b30a8-8e62-4267-9609-912d1a8617c5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.275672 kubelet[2545]: E0213 20:16:20.275617 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.275756 kubelet[2545]: E0213 20:16:20.275718 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-645b8d968-55plt"
Feb 13 20:16:20.275854 kubelet[2545]: E0213 20:16:20.275773 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-645b8d968-55plt"
Feb 13 20:16:20.275998 kubelet[2545]: E0213 20:16:20.275858 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-645b8d968-55plt_calico-apiserver(c63b30a8-8e62-4267-9609-912d1a8617c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-645b8d968-55plt_calico-apiserver(c63b30a8-8e62-4267-9609-912d1a8617c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-645b8d968-55plt" podUID="c63b30a8-8e62-4267-9609-912d1a8617c5"
Feb 13 20:16:20.796064 systemd[1]: Created slice kubepods-besteffort-pod55870946_44e5_4646_b49c_964c3d25ad4a.slice - libcontainer container kubepods-besteffort-pod55870946_44e5_4646_b49c_964c3d25ad4a.slice.
Feb 13 20:16:20.799172 containerd[1465]: time="2025-02-13T20:16:20.799097589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtgvf,Uid:55870946-44e5-4646-b49c-964c3d25ad4a,Namespace:calico-system,Attempt:0,}"
Feb 13 20:16:20.880046 containerd[1465]: time="2025-02-13T20:16:20.879978677Z" level=error msg="Failed to destroy network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.881368 containerd[1465]: time="2025-02-13T20:16:20.881293994Z" level=error msg="encountered an error cleaning up failed sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.881578 containerd[1465]: time="2025-02-13T20:16:20.881394530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtgvf,Uid:55870946-44e5-4646-b49c-964c3d25ad4a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.881777 kubelet[2545]: E0213 20:16:20.881734 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:20.881945 kubelet[2545]: E0213 20:16:20.881814 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:20.881945 kubelet[2545]: E0213 20:16:20.881853 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vtgvf"
Feb 13 20:16:20.881945 kubelet[2545]: E0213 20:16:20.881914 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vtgvf_calico-system(55870946-44e5-4646-b49c-964c3d25ad4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vtgvf_calico-system(55870946-44e5-4646-b49c-964c3d25ad4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a"
Feb 13 20:16:20.885269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc-shm.mount: Deactivated successfully.
Feb 13 20:16:20.942991 kubelet[2545]: I0213 20:16:20.942930 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:20.944141 containerd[1465]: time="2025-02-13T20:16:20.944081279Z" level=info msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\""
Feb 13 20:16:20.944404 containerd[1465]: time="2025-02-13T20:16:20.944355381Z" level=info msg="Ensure that sandbox 19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f in task-service has been cleanup successfully"
Feb 13 20:16:20.955073 containerd[1465]: time="2025-02-13T20:16:20.953220373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 20:16:20.957238 kubelet[2545]: I0213 20:16:20.957165 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc"
Feb 13 20:16:20.960804 containerd[1465]: time="2025-02-13T20:16:20.960731729Z" level=info msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\""
Feb 13 20:16:20.962620 containerd[1465]: time="2025-02-13T20:16:20.962559201Z" level=info msg="Ensure that sandbox 95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc in task-service has been cleanup successfully"
Feb 13 20:16:20.963628 kubelet[2545]: I0213 20:16:20.963562 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9"
Feb 13 20:16:20.965066 containerd[1465]: time="2025-02-13T20:16:20.964846627Z" level=info msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\""
Feb 13 20:16:20.968325 containerd[1465]: time="2025-02-13T20:16:20.967978825Z" level=info msg="Ensure that sandbox 61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9 in task-service has been cleanup successfully"
Feb 13 20:16:20.971394 kubelet[2545]: I0213 20:16:20.971366 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d"
Feb 13 20:16:20.974893 containerd[1465]: time="2025-02-13T20:16:20.974356210Z" level=info msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\""
Feb 13 20:16:20.974893 containerd[1465]: time="2025-02-13T20:16:20.974576653Z" level=info msg="Ensure that sandbox 748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d in task-service has been cleanup successfully"
Feb 13 20:16:20.981523 kubelet[2545]: I0213 20:16:20.981491 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc"
Feb 13 20:16:20.983392 containerd[1465]: time="2025-02-13T20:16:20.983358510Z" level=info msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\""
Feb 13 20:16:20.983769 containerd[1465]: time="2025-02-13T20:16:20.983740950Z" level=info msg="Ensure that sandbox 172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc in task-service has been cleanup successfully"
Feb 13 20:16:20.992761 kubelet[2545]: I0213 20:16:20.992733 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:20.996089 containerd[1465]: time="2025-02-13T20:16:20.995100634Z" level=info msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\""
Feb 13 20:16:20.996089 containerd[1465]: time="2025-02-13T20:16:20.995499453Z" level=info msg="Ensure that sandbox d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654 in task-service has been cleanup successfully"
Feb 13 20:16:21.098638 containerd[1465]: time="2025-02-13T20:16:21.098455019Z" level=error msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" failed" error="failed to destroy network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:21.100837 kubelet[2545]: E0213 20:16:21.098883 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9"
Feb 13 20:16:21.100837 kubelet[2545]: E0213 20:16:21.098956 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9"}
Feb 13 20:16:21.100837 kubelet[2545]: E0213 20:16:21.099060 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c63b30a8-8e62-4267-9609-912d1a8617c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:16:21.100837 kubelet[2545]: E0213 20:16:21.099099 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c63b30a8-8e62-4267-9609-912d1a8617c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-645b8d968-55plt" podUID="c63b30a8-8e62-4267-9609-912d1a8617c5"
Feb 13 20:16:21.129153 containerd[1465]: time="2025-02-13T20:16:21.126917899Z" level=error msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" failed" error="failed to destroy network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:21.129340 kubelet[2545]: E0213 20:16:21.127477 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:21.129340 kubelet[2545]: E0213 20:16:21.127755 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"}
Feb 13 20:16:21.129340 kubelet[2545]: E0213 20:16:21.127838 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf8b9894-04eb-4f05-8268-01b34a155c39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:16:21.129340 kubelet[2545]: E0213 20:16:21.127878 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf8b9894-04eb-4f05-8268-01b34a155c39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pbzwk" podUID="bf8b9894-04eb-4f05-8268-01b34a155c39"
Feb 13 20:16:21.139331 containerd[1465]: time="2025-02-13T20:16:21.139271681Z" level=error msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" failed" error="failed to destroy network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:21.139741 containerd[1465]: time="2025-02-13T20:16:21.139702118Z" level=error msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" failed" error="failed to destroy network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:21.139848 kubelet[2545]: E0213 20:16:21.139783 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc"
Feb 13 20:16:21.139927 kubelet[2545]: E0213 20:16:21.139842 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc"}
Feb 13 20:16:21.140789 kubelet[2545]: E0213 20:16:21.140727 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:21.140886 kubelet[2545]: E0213 20:16:21.140798 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"}
Feb 13 20:16:21.140957 kubelet[2545]: E0213 20:16:21.140921 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"061fba9c-316a-4909-a848-0cb5a7c86a19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:16:21.141076 kubelet[2545]: E0213 20:16:21.140984 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"061fba9c-316a-4909-a848-0cb5a7c86a19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8" podUID="061fba9c-316a-4909-a848-0cb5a7c86a19"
Feb 13 20:16:21.141076 kubelet[2545]: E0213 20:16:21.141051 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55870946-44e5-4646-b49c-964c3d25ad4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:16:21.141307 kubelet[2545]: E0213 20:16:21.141085 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55870946-44e5-4646-b49c-964c3d25ad4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vtgvf" podUID="55870946-44e5-4646-b49c-964c3d25ad4a"
Feb 13 20:16:21.158749 containerd[1465]: time="2025-02-13T20:16:21.158692434Z" level=error msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" failed" error="failed to destroy network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:16:21.159316 kubelet[2545]: E0213 20:16:21.159254 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc"
Feb 13 20:16:21.159566 containerd[1465]: time="2025-02-13T20:16:21.159520836Z" level=error msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"
failed" error="failed to destroy network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:21.159670 kubelet[2545]: E0213 20:16:21.159570 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc"} Feb 13 20:16:21.159750 kubelet[2545]: E0213 20:16:21.159727 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c79cadd8-8457-48ba-9385-1ff5bfefcfc8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:21.159865 kubelet[2545]: E0213 20:16:21.159806 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c79cadd8-8457-48ba-9385-1ff5bfefcfc8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658869675d-mqtbl" podUID="c79cadd8-8457-48ba-9385-1ff5bfefcfc8" Feb 13 20:16:21.160296 kubelet[2545]: E0213 20:16:21.160257 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:21.160402 kubelet[2545]: E0213 20:16:21.160304 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d"} Feb 13 20:16:21.160402 kubelet[2545]: E0213 20:16:21.160383 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab60a360-887a-466f-9f36-830c771a9b75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:21.160573 kubelet[2545]: E0213 20:16:21.160444 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab60a360-887a-466f-9f36-830c771a9b75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9hwk" podUID="ab60a360-887a-466f-9f36-830c771a9b75" Feb 13 20:16:27.543395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112420747.mount: Deactivated successfully. 
Feb 13 20:16:27.586812 containerd[1465]: time="2025-02-13T20:16:27.586720098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:27.588146 containerd[1465]: time="2025-02-13T20:16:27.588051332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 20:16:27.589559 containerd[1465]: time="2025-02-13T20:16:27.589481635Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:27.592293 containerd[1465]: time="2025-02-13T20:16:27.592224108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:27.593272 containerd[1465]: time="2025-02-13T20:16:27.593063300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.639791764s"
Feb 13 20:16:27.593272 containerd[1465]: time="2025-02-13T20:16:27.593113705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 20:16:27.612501 containerd[1465]: time="2025-02-13T20:16:27.611656057Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 20:16:27.640040 containerd[1465]: time="2025-02-13T20:16:27.639966652Z" level=info msg="CreateContainer within sandbox \"167595cbdb0ec7d7149f843c2c1a093e1cf7a92cf531238f1d431a4fad1b281c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16\""
Feb 13 20:16:27.642169 containerd[1465]: time="2025-02-13T20:16:27.640944683Z" level=info msg="StartContainer for \"f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16\""
Feb 13 20:16:27.682355 systemd[1]: Started cri-containerd-f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16.scope - libcontainer container f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16.
Feb 13 20:16:27.730868 containerd[1465]: time="2025-02-13T20:16:27.730175729Z" level=info msg="StartContainer for \"f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16\" returns successfully"
Feb 13 20:16:27.834825 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 20:16:27.834989 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 20:16:31.791361 containerd[1465]: time="2025-02-13T20:16:31.790850502Z" level=info msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\""
Feb 13 20:16:31.851410 kubelet[2545]: I0213 20:16:31.851170 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j4nf5" podStartSLOduration=5.614324559 podStartE2EDuration="23.85114552s" podCreationTimestamp="2025-02-13 20:16:08 +0000 UTC" firstStartedPulling="2025-02-13 20:16:09.357567281 +0000 UTC m=+13.691617476" lastFinishedPulling="2025-02-13 20:16:27.594388238 +0000 UTC m=+31.928438437" observedRunningTime="2025-02-13 20:16:28.062361951 +0000 UTC m=+32.396412198" watchObservedRunningTime="2025-02-13 20:16:31.85114552 +0000 UTC m=+36.185195722"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.851 [INFO][3887] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.851 [INFO][3887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" iface="eth0" netns="/var/run/netns/cni-8e054952-16e8-b713-8e8d-f473b4adf591"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.852 [INFO][3887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" iface="eth0" netns="/var/run/netns/cni-8e054952-16e8-b713-8e8d-f473b4adf591"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.852 [INFO][3887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" iface="eth0" netns="/var/run/netns/cni-8e054952-16e8-b713-8e8d-f473b4adf591"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.852 [INFO][3887] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.852 [INFO][3887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.877 [INFO][3893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.877 [INFO][3893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.877 [INFO][3893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.886 [WARNING][3893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.886 [INFO][3893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.888 [INFO][3893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:16:31.892355 containerd[1465]: 2025-02-13 20:16:31.890 [INFO][3887] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654"
Feb 13 20:16:31.896529 containerd[1465]: time="2025-02-13T20:16:31.892508580Z" level=info msg="TearDown network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" successfully"
Feb 13 20:16:31.896529 containerd[1465]: time="2025-02-13T20:16:31.892549093Z" level=info msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" returns successfully"
Feb 13 20:16:31.896529 containerd[1465]: time="2025-02-13T20:16:31.895492920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pbzwk,Uid:bf8b9894-04eb-4f05-8268-01b34a155c39,Namespace:kube-system,Attempt:1,}"
Feb 13 20:16:31.899912 systemd[1]: run-netns-cni\x2d8e054952\x2d16e8\x2db713\x2d8e8d\x2df473b4adf591.mount: Deactivated successfully.
Feb 13 20:16:32.057094 systemd-networkd[1383]: calida89eb5cfe0: Link UP
Feb 13 20:16:32.057466 systemd-networkd[1383]: calida89eb5cfe0: Gained carrier
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:31.947 [INFO][3900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:31.962 [INFO][3900] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0 coredns-6f6b679f8f- kube-system bf8b9894-04eb-4f05-8268-01b34a155c39 761 0 2025-02-13 20:16:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal coredns-6f6b679f8f-pbzwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida89eb5cfe0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:31.962 [INFO][3900] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:31.998 [INFO][3910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" HandleID="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.011 [INFO][3910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" HandleID="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-pbzwk", "timestamp":"2025-02-13 20:16:31.998970955 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.011 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.011 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.011 [INFO][3910] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal'
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.014 [INFO][3910] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.018 [INFO][3910] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.023 [INFO][3910] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.026 [INFO][3910] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.029 [INFO][3910] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.029 [INFO][3910] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.031 [INFO][3910] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.035 [INFO][3910] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.043 [INFO][3910] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.193/26] block=192.168.11.192/26 handle="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.043 [INFO][3910] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.193/26] handle="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal"
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.043 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:16:32.085323 containerd[1465]: 2025-02-13 20:16:32.043 [INFO][3910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.193/26] IPv6=[] ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" HandleID="k8s-pod-network.e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.046 [INFO][3900] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8b9894-04eb-4f05-8268-01b34a155c39", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-pbzwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida89eb5cfe0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.046 [INFO][3900] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.193/32] ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.046 [INFO][3900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida89eb5cfe0 ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.056 [INFO][3900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.060 [INFO][3900] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8b9894-04eb-4f05-8268-01b34a155c39", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a", Pod:"coredns-6f6b679f8f-pbzwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida89eb5cfe0", MAC:"32:1d:2d:49:c5:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:16:32.087710 containerd[1465]: 2025-02-13 20:16:32.083 [INFO][3900] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a" Namespace="kube-system" Pod="coredns-6f6b679f8f-pbzwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0"
Feb 13 20:16:32.112245 containerd[1465]: time="2025-02-13T20:16:32.112113885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:16:32.113300 containerd[1465]: time="2025-02-13T20:16:32.113041607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:16:32.113300 containerd[1465]: time="2025-02-13T20:16:32.113074535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:32.113895 containerd[1465]: time="2025-02-13T20:16:32.113220248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:32.146341 systemd[1]: Started cri-containerd-e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a.scope - libcontainer container e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a.
Feb 13 20:16:32.200757 containerd[1465]: time="2025-02-13T20:16:32.200706792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pbzwk,Uid:bf8b9894-04eb-4f05-8268-01b34a155c39,Namespace:kube-system,Attempt:1,} returns sandbox id \"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a\""
Feb 13 20:16:32.205955 containerd[1465]: time="2025-02-13T20:16:32.205906152Z" level=info msg="CreateContainer within sandbox \"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:16:32.209193 kubelet[2545]: I0213 20:16:32.209158 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:16:32.228684 containerd[1465]: time="2025-02-13T20:16:32.228618394Z" level=info msg="CreateContainer within sandbox \"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab785444fee8c41325e14d9c4f593751f967614da220d9bf7e0513a1070c0054\""
Feb 13 20:16:32.233347 containerd[1465]: time="2025-02-13T20:16:32.233272269Z" level=info msg="StartContainer for \"ab785444fee8c41325e14d9c4f593751f967614da220d9bf7e0513a1070c0054\""
Feb 13 20:16:32.282252 systemd[1]: Started cri-containerd-ab785444fee8c41325e14d9c4f593751f967614da220d9bf7e0513a1070c0054.scope - libcontainer container ab785444fee8c41325e14d9c4f593751f967614da220d9bf7e0513a1070c0054.
Feb 13 20:16:32.325087 containerd[1465]: time="2025-02-13T20:16:32.324956969Z" level=info msg="StartContainer for \"ab785444fee8c41325e14d9c4f593751f967614da220d9bf7e0513a1070c0054\" returns successfully"
Feb 13 20:16:32.792373 containerd[1465]: time="2025-02-13T20:16:32.791461758Z" level=info msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\""
Feb 13 20:16:32.840311 kernel: bpftool[4062]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.920 [INFO][4054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.920 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" iface="eth0" netns="/var/run/netns/cni-294941fb-79fe-dfd5-d531-3b3d0a6ec337"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.920 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" iface="eth0" netns="/var/run/netns/cni-294941fb-79fe-dfd5-d531-3b3d0a6ec337"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.921 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" iface="eth0" netns="/var/run/netns/cni-294941fb-79fe-dfd5-d531-3b3d0a6ec337"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.921 [INFO][4054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.921 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.973 [INFO][4067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.974 [INFO][4067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.974 [INFO][4067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.987 [WARNING][4067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.988 [INFO][4067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0"
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.990 [INFO][4067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:16:32.996355 containerd[1465]: 2025-02-13 20:16:32.994 [INFO][4054] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"
Feb 13 20:16:32.997097 containerd[1465]: time="2025-02-13T20:16:32.996618354Z" level=info msg="TearDown network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" successfully"
Feb 13 20:16:32.997097 containerd[1465]: time="2025-02-13T20:16:32.996659073Z" level=info msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" returns successfully"
Feb 13 20:16:33.000286 containerd[1465]: time="2025-02-13T20:16:32.997937280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-sl4b8,Uid:061fba9c-316a-4909-a848-0cb5a7c86a19,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 20:16:33.004822 systemd[1]: run-netns-cni\x2d294941fb\x2d79fe\x2ddfd5\x2dd531\x2d3b3d0a6ec337.mount: Deactivated successfully.
Feb 13 20:16:33.098084 kubelet[2545]: I0213 20:16:33.095365 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pbzwk" podStartSLOduration=31.095338192 podStartE2EDuration="31.095338192s" podCreationTimestamp="2025-02-13 20:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:33.07543346 +0000 UTC m=+37.409483664" watchObservedRunningTime="2025-02-13 20:16:33.095338192 +0000 UTC m=+37.429388394" Feb 13 20:16:33.275507 systemd-networkd[1383]: cali795bdc98f88: Link UP Feb 13 20:16:33.276249 systemd-networkd[1383]: cali795bdc98f88: Gained carrier Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.126 [INFO][4074] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0 calico-apiserver-645b8d968- calico-apiserver 061fba9c-316a-4909-a848-0cb5a7c86a19 776 0 2025-02-13 20:16:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:645b8d968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal calico-apiserver-645b8d968-sl4b8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali795bdc98f88 [] []}} ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.126 [INFO][4074] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.211 [INFO][4089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" HandleID="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.224 [INFO][4089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" HandleID="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"calico-apiserver-645b8d968-sl4b8", "timestamp":"2025-02-13 20:16:33.210773853 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.225 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.225 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.226 [INFO][4089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal' Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.228 [INFO][4089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.234 [INFO][4089] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.241 [INFO][4089] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.244 [INFO][4089] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.249 [INFO][4089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.249 [INFO][4089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.252 [INFO][4089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786 Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.257 [INFO][4089] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.11.192/26 handle="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.266 [INFO][4089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.194/26] block=192.168.11.192/26 handle="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.267 [INFO][4089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.194/26] handle="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.267 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:33.303005 containerd[1465]: 2025-02-13 20:16:33.267 [INFO][4089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.194/26] IPv6=[] ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" HandleID="k8s-pod-network.eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.269 [INFO][4074] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"061fba9c-316a-4909-a848-0cb5a7c86a19", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-645b8d968-sl4b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795bdc98f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.269 [INFO][4074] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.194/32] ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.269 [INFO][4074] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali795bdc98f88 ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.278 [INFO][4074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.280 [INFO][4074] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"061fba9c-316a-4909-a848-0cb5a7c86a19", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786", Pod:"calico-apiserver-645b8d968-sl4b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795bdc98f88", MAC:"f6:90:32:75:0a:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:33.305829 containerd[1465]: 2025-02-13 20:16:33.300 [INFO][4074] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-sl4b8" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:33.349687 containerd[1465]: time="2025-02-13T20:16:33.349239929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:33.349687 containerd[1465]: time="2025-02-13T20:16:33.349334191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:33.349687 containerd[1465]: time="2025-02-13T20:16:33.349362503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:33.349687 containerd[1465]: time="2025-02-13T20:16:33.349480159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:33.412403 systemd[1]: Started cri-containerd-eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786.scope - libcontainer container eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786. Feb 13 20:16:33.491548 containerd[1465]: time="2025-02-13T20:16:33.491354582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-sl4b8,Uid:061fba9c-316a-4909-a848-0cb5a7c86a19,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786\"" Feb 13 20:16:33.494678 containerd[1465]: time="2025-02-13T20:16:33.494639126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:16:33.634640 systemd-networkd[1383]: vxlan.calico: Link UP Feb 13 20:16:33.634652 systemd-networkd[1383]: vxlan.calico: Gained carrier Feb 13 20:16:33.691782 systemd-networkd[1383]: calida89eb5cfe0: Gained IPv6LL Feb 13 20:16:33.797642 containerd[1465]: time="2025-02-13T20:16:33.797504498Z" level=info msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\"" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.889 [INFO][4211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.890 [INFO][4211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" iface="eth0" netns="/var/run/netns/cni-eb652c6b-10a9-2e2e-75fc-87cc16156e30" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.890 [INFO][4211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" iface="eth0" netns="/var/run/netns/cni-eb652c6b-10a9-2e2e-75fc-87cc16156e30" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.891 [INFO][4211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" iface="eth0" netns="/var/run/netns/cni-eb652c6b-10a9-2e2e-75fc-87cc16156e30" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.891 [INFO][4211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.891 [INFO][4211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.938 [INFO][4217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.938 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.938 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.948 [WARNING][4217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.948 [INFO][4217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.950 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:33.953900 containerd[1465]: 2025-02-13 20:16:33.952 [INFO][4211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:33.961223 containerd[1465]: time="2025-02-13T20:16:33.956245955Z" level=info msg="TearDown network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" successfully" Feb 13 20:16:33.961223 containerd[1465]: time="2025-02-13T20:16:33.956295234Z" level=info msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" returns successfully" Feb 13 20:16:33.961223 containerd[1465]: time="2025-02-13T20:16:33.957219482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtgvf,Uid:55870946-44e5-4646-b49c-964c3d25ad4a,Namespace:calico-system,Attempt:1,}" Feb 13 20:16:33.966816 systemd[1]: run-netns-cni\x2deb652c6b\x2d10a9\x2d2e2e\x2d75fc\x2d87cc16156e30.mount: Deactivated successfully. 
Feb 13 20:16:34.232439 systemd-networkd[1383]: cali3e149c17285: Link UP Feb 13 20:16:34.234921 systemd-networkd[1383]: cali3e149c17285: Gained carrier Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.070 [INFO][4224] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0 csi-node-driver- calico-system 55870946-44e5-4646-b49c-964c3d25ad4a 792 0 2025-02-13 20:16:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal csi-node-driver-vtgvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e149c17285 [] []}} ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.072 [INFO][4224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.152 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" HandleID="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" 
Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.168 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" HandleID="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a9780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"csi-node-driver-vtgvf", "timestamp":"2025-02-13 20:16:34.152314569 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.168 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.168 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.168 [INFO][4248] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal' Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.171 [INFO][4248] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.178 [INFO][4248] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.188 [INFO][4248] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.192 [INFO][4248] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.197 [INFO][4248] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.197 [INFO][4248] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.199 [INFO][4248] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01 Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.206 [INFO][4248] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.11.192/26 handle="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.221 [INFO][4248] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.195/26] block=192.168.11.192/26 handle="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.223 [INFO][4248] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.195/26] handle="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.223 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:34.257579 containerd[1465]: 2025-02-13 20:16:34.223 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.195/26] IPv6=[] ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" HandleID="k8s-pod-network.5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.226 [INFO][4224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55870946-44e5-4646-b49c-964c3d25ad4a", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-vtgvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e149c17285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.226 [INFO][4224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.195/32] ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.226 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e149c17285 
ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.231 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.233 [INFO][4224] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55870946-44e5-4646-b49c-964c3d25ad4a", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01", Pod:"csi-node-driver-vtgvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e149c17285", MAC:"c6:35:a0:00:c5:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.259752 containerd[1465]: 2025-02-13 20:16:34.253 [INFO][4224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01" Namespace="calico-system" Pod="csi-node-driver-vtgvf" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:34.298925 containerd[1465]: time="2025-02-13T20:16:34.298657247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:34.298925 containerd[1465]: time="2025-02-13T20:16:34.298726513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:34.298925 containerd[1465]: time="2025-02-13T20:16:34.298854890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:34.300151 containerd[1465]: time="2025-02-13T20:16:34.299395912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:34.331542 systemd-networkd[1383]: cali795bdc98f88: Gained IPv6LL Feb 13 20:16:34.346761 systemd[1]: Started cri-containerd-5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01.scope - libcontainer container 5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01. Feb 13 20:16:34.381308 containerd[1465]: time="2025-02-13T20:16:34.381258198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtgvf,Uid:55870946-44e5-4646-b49c-964c3d25ad4a,Namespace:calico-system,Attempt:1,} returns sandbox id \"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01\"" Feb 13 20:16:34.790615 containerd[1465]: time="2025-02-13T20:16:34.790550510Z" level=info msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"" Feb 13 20:16:34.793391 containerd[1465]: time="2025-02-13T20:16:34.792944496Z" level=info msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\"" Feb 13 20:16:34.798266 containerd[1465]: time="2025-02-13T20:16:34.798214426Z" level=info msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\"" Feb 13 20:16:34.843506 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.968 [INFO][4378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.969 [INFO][4378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" iface="eth0" netns="/var/run/netns/cni-aa5665af-e849-b88a-42f2-73f1c5bae3ef" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.970 [INFO][4378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" iface="eth0" netns="/var/run/netns/cni-aa5665af-e849-b88a-42f2-73f1c5bae3ef" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.972 [INFO][4378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" iface="eth0" netns="/var/run/netns/cni-aa5665af-e849-b88a-42f2-73f1c5bae3ef" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.972 [INFO][4378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:34.972 [INFO][4378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.051 [INFO][4396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.052 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.052 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.064 [WARNING][4396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.064 [INFO][4396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.068 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:35.077087 containerd[1465]: 2025-02-13 20:16:35.070 [INFO][4378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:35.088723 systemd[1]: run-netns-cni\x2daa5665af\x2de849\x2db88a\x2d42f2\x2d73f1c5bae3ef.mount: Deactivated successfully. 
Feb 13 20:16:35.091867 containerd[1465]: time="2025-02-13T20:16:35.091818176Z" level=info msg="TearDown network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" successfully" Feb 13 20:16:35.092600 containerd[1465]: time="2025-02-13T20:16:35.092011325Z" level=info msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" returns successfully" Feb 13 20:16:35.093480 containerd[1465]: time="2025-02-13T20:16:35.092957950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-55plt,Uid:c63b30a8-8e62-4267-9609-912d1a8617c5,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.960 [INFO][4377] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.961 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" iface="eth0" netns="/var/run/netns/cni-ae99deac-072d-c09b-0733-7d24b83e046d" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.961 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" iface="eth0" netns="/var/run/netns/cni-ae99deac-072d-c09b-0733-7d24b83e046d" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.963 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" iface="eth0" netns="/var/run/netns/cni-ae99deac-072d-c09b-0733-7d24b83e046d" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.963 [INFO][4377] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:34.966 [INFO][4377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.102 [INFO][4395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.103 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.103 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.121 [WARNING][4395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.121 [INFO][4395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.124 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:35.154523 containerd[1465]: 2025-02-13 20:16:35.147 [INFO][4377] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:35.158546 containerd[1465]: time="2025-02-13T20:16:35.157103849Z" level=info msg="TearDown network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" successfully" Feb 13 20:16:35.160476 containerd[1465]: time="2025-02-13T20:16:35.160331510Z" level=info msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" returns successfully" Feb 13 20:16:35.161550 systemd[1]: run-netns-cni\x2dae99deac\x2d072d\x2dc09b\x2d0733\x2d7d24b83e046d.mount: Deactivated successfully. 
Feb 13 20:16:35.166225 containerd[1465]: time="2025-02-13T20:16:35.165257002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658869675d-mqtbl,Uid:c79cadd8-8457-48ba-9385-1ff5bfefcfc8,Namespace:calico-system,Attempt:1,}" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.002 [INFO][4376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.004 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" iface="eth0" netns="/var/run/netns/cni-867d0867-d980-bf6e-288f-5a76684758cb" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.005 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" iface="eth0" netns="/var/run/netns/cni-867d0867-d980-bf6e-288f-5a76684758cb" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.007 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" iface="eth0" netns="/var/run/netns/cni-867d0867-d980-bf6e-288f-5a76684758cb" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.008 [INFO][4376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.008 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.115 [INFO][4403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.116 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.125 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.152 [WARNING][4403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.152 [INFO][4403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.156 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:35.170320 containerd[1465]: 2025-02-13 20:16:35.167 [INFO][4376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:35.175572 containerd[1465]: time="2025-02-13T20:16:35.173462088Z" level=info msg="TearDown network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" successfully" Feb 13 20:16:35.180938 systemd[1]: run-netns-cni\x2d867d0867\x2dd980\x2dbf6e\x2d288f\x2d5a76684758cb.mount: Deactivated successfully. 
Feb 13 20:16:35.185166 containerd[1465]: time="2025-02-13T20:16:35.177998413Z" level=info msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" returns successfully" Feb 13 20:16:35.186094 containerd[1465]: time="2025-02-13T20:16:35.186062900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hwk,Uid:ab60a360-887a-466f-9f36-830c771a9b75,Namespace:kube-system,Attempt:1,}" Feb 13 20:16:35.502961 systemd-networkd[1383]: calidcda961bbee: Link UP Feb 13 20:16:35.510056 systemd-networkd[1383]: calidcda961bbee: Gained carrier Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.247 [INFO][4413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0 calico-apiserver-645b8d968- calico-apiserver c63b30a8-8e62-4267-9609-912d1a8617c5 805 0 2025-02-13 20:16:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:645b8d968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal calico-apiserver-645b8d968-55plt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcda961bbee [] []}} ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.247 [INFO][4413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" 
Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.376 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" HandleID="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.402 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" HandleID="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"calico-apiserver-645b8d968-55plt", "timestamp":"2025-02-13 20:16:35.372624468 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.403 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.403 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.403 [INFO][4446] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal' Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.409 [INFO][4446] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.421 [INFO][4446] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.434 [INFO][4446] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.437 [INFO][4446] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.444 [INFO][4446] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.444 [INFO][4446] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.447 [INFO][4446] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6 Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.458 [INFO][4446] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.11.192/26 handle="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.477 [INFO][4446] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.196/26] block=192.168.11.192/26 handle="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.478 [INFO][4446] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.196/26] handle="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.478 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:35.555154 containerd[1465]: 2025-02-13 20:16:35.478 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.196/26] IPv6=[] ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" HandleID="k8s-pod-network.0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.485 [INFO][4413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c63b30a8-8e62-4267-9609-912d1a8617c5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-645b8d968-55plt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcda961bbee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.486 [INFO][4413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.196/32] ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.487 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to calidcda961bbee ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.510 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.514 [INFO][4413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c63b30a8-8e62-4267-9609-912d1a8617c5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6", Pod:"calico-apiserver-645b8d968-55plt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcda961bbee", MAC:"4a:fe:de:f9:81:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.556919 containerd[1465]: 2025-02-13 20:16:35.548 [INFO][4413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6" Namespace="calico-apiserver" Pod="calico-apiserver-645b8d968-55plt" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:35.629434 containerd[1465]: time="2025-02-13T20:16:35.629228516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:35.629434 containerd[1465]: time="2025-02-13T20:16:35.629317300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:35.629434 containerd[1465]: time="2025-02-13T20:16:35.629346040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:35.632512 containerd[1465]: time="2025-02-13T20:16:35.632020968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:35.683084 systemd-networkd[1383]: cali742bfec23bd: Link UP Feb 13 20:16:35.684385 systemd-networkd[1383]: cali742bfec23bd: Gained carrier Feb 13 20:16:35.684442 systemd[1]: Started cri-containerd-0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6.scope - libcontainer container 0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6. Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.345 [INFO][4425] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0 coredns-6f6b679f8f- kube-system ab60a360-887a-466f-9f36-830c771a9b75 806 0 2025-02-13 20:16:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal coredns-6f6b679f8f-v9hwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali742bfec23bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.346 [INFO][4425] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" 
WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.461 [INFO][4457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" HandleID="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.517 [INFO][4457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" HandleID="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bac20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-v9hwk", "timestamp":"2025-02-13 20:16:35.461958805 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.517 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.517 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.517 [INFO][4457] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal' Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.532 [INFO][4457] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.602 [INFO][4457] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.615 [INFO][4457] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.619 [INFO][4457] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.624 [INFO][4457] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.624 [INFO][4457] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.627 [INFO][4457] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.640 [INFO][4457] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.11.192/26 handle="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.660 [INFO][4457] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.197/26] block=192.168.11.192/26 handle="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.661 [INFO][4457] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.197/26] handle="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.663 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:35.715177 containerd[1465]: 2025-02-13 20:16:35.663 [INFO][4457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.197/26] IPv6=[] ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" HandleID="k8s-pod-network.a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.667 [INFO][4425] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab60a360-887a-466f-9f36-830c771a9b75", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-v9hwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali742bfec23bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.667 [INFO][4425] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.197/32] ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" 
WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.667 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali742bfec23bd ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.682 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.682 [INFO][4425] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab60a360-887a-466f-9f36-830c771a9b75", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c", Pod:"coredns-6f6b679f8f-v9hwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali742bfec23bd", MAC:"56:32:a1:23:0f:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.719584 containerd[1465]: 2025-02-13 20:16:35.710 [INFO][4425] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hwk" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:35.809400 systemd-networkd[1383]: cali920e22d985e: Link UP Feb 13 20:16:35.817434 systemd-networkd[1383]: cali920e22d985e: Gained carrier Feb 13 20:16:35.857961 containerd[1465]: time="2025-02-13T20:16:35.857433664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:35.857961 containerd[1465]: time="2025-02-13T20:16:35.857508244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:35.857961 containerd[1465]: time="2025-02-13T20:16:35.857538159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:35.857961 containerd[1465]: time="2025-02-13T20:16:35.857692061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.382 [INFO][4436] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0 calico-kube-controllers-658869675d- calico-system c79cadd8-8457-48ba-9385-1ff5bfefcfc8 803 0 2025-02-13 20:16:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:658869675d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal calico-kube-controllers-658869675d-mqtbl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali920e22d985e [] []}} ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.386 [INFO][4436] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.508 [INFO][4463] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" HandleID="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.533 [INFO][4463] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" HandleID="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003149d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", "pod":"calico-kube-controllers-658869675d-mqtbl", "timestamp":"2025-02-13 20:16:35.508340823 +0000 UTC"}, Hostname:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.534 [INFO][4463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.664 [INFO][4463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.664 [INFO][4463] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal' Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.671 [INFO][4463] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.709 [INFO][4463] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.726 [INFO][4463] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.735 [INFO][4463] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.743 [INFO][4463] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.744 [INFO][4463] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.749 [INFO][4463] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2 Feb 13 
20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.767 [INFO][4463] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.784 [INFO][4463] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.198/26] block=192.168.11.192/26 handle="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.785 [INFO][4463] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.198/26] handle="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" host="ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal" Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.785 [INFO][4463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:16:35.882172 containerd[1465]: 2025-02-13 20:16:35.785 [INFO][4463] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.198/26] IPv6=[] ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" HandleID="k8s-pod-network.8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.883910 containerd[1465]: 2025-02-13 20:16:35.796 [INFO][4436] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0", GenerateName:"calico-kube-controllers-658869675d-", Namespace:"calico-system", SelfLink:"", UID:"c79cadd8-8457-48ba-9385-1ff5bfefcfc8", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658869675d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-658869675d-mqtbl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali920e22d985e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.883910 containerd[1465]: 2025-02-13 20:16:35.796 [INFO][4436] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.198/32] ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.883910 containerd[1465]: 2025-02-13 20:16:35.796 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali920e22d985e ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.883910 containerd[1465]: 2025-02-13 20:16:35.814 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.883910 containerd[1465]: 2025-02-13 20:16:35.821 [INFO][4436] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0", GenerateName:"calico-kube-controllers-658869675d-", Namespace:"calico-system", SelfLink:"", UID:"c79cadd8-8457-48ba-9385-1ff5bfefcfc8", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658869675d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2", Pod:"calico-kube-controllers-658869675d-mqtbl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali920e22d985e", MAC:"0a:01:a5:2a:74:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:35.883910 containerd[1465]: 
2025-02-13 20:16:35.872 [INFO][4436] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2" Namespace="calico-system" Pod="calico-kube-controllers-658869675d-mqtbl" WorkloadEndpoint="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:35.961382 systemd[1]: Started cri-containerd-a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c.scope - libcontainer container a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c. Feb 13 20:16:36.033148 containerd[1465]: time="2025-02-13T20:16:36.032341069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:36.033148 containerd[1465]: time="2025-02-13T20:16:36.032420013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:36.033148 containerd[1465]: time="2025-02-13T20:16:36.032449188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:36.033148 containerd[1465]: time="2025-02-13T20:16:36.032595924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:36.127355 systemd[1]: Started cri-containerd-8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2.scope - libcontainer container 8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2. 
Feb 13 20:16:36.188454 systemd-networkd[1383]: cali3e149c17285: Gained IPv6LL Feb 13 20:16:36.206330 containerd[1465]: time="2025-02-13T20:16:36.205809443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hwk,Uid:ab60a360-887a-466f-9f36-830c771a9b75,Namespace:kube-system,Attempt:1,} returns sandbox id \"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c\"" Feb 13 20:16:36.218853 containerd[1465]: time="2025-02-13T20:16:36.218701310Z" level=info msg="CreateContainer within sandbox \"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:16:36.252522 containerd[1465]: time="2025-02-13T20:16:36.252231533Z" level=info msg="CreateContainer within sandbox \"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f7852d89bfb0763d831f2ad804cc21c40a70d94be3073b27ae85d0904a79b35\"" Feb 13 20:16:36.255513 containerd[1465]: time="2025-02-13T20:16:36.254715912Z" level=info msg="StartContainer for \"3f7852d89bfb0763d831f2ad804cc21c40a70d94be3073b27ae85d0904a79b35\"" Feb 13 20:16:36.351789 containerd[1465]: time="2025-02-13T20:16:36.351730954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-645b8d968-55plt,Uid:c63b30a8-8e62-4267-9609-912d1a8617c5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6\"" Feb 13 20:16:36.358775 systemd[1]: Started cri-containerd-3f7852d89bfb0763d831f2ad804cc21c40a70d94be3073b27ae85d0904a79b35.scope - libcontainer container 3f7852d89bfb0763d831f2ad804cc21c40a70d94be3073b27ae85d0904a79b35. 
Feb 13 20:16:36.378829 containerd[1465]: time="2025-02-13T20:16:36.378661639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658869675d-mqtbl,Uid:c79cadd8-8457-48ba-9385-1ff5bfefcfc8,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2\"" Feb 13 20:16:36.432462 containerd[1465]: time="2025-02-13T20:16:36.432241921Z" level=info msg="StartContainer for \"3f7852d89bfb0763d831f2ad804cc21c40a70d94be3073b27ae85d0904a79b35\" returns successfully" Feb 13 20:16:36.635831 systemd-networkd[1383]: calidcda961bbee: Gained IPv6LL Feb 13 20:16:37.054445 containerd[1465]: time="2025-02-13T20:16:37.054373683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.055784 containerd[1465]: time="2025-02-13T20:16:37.055702541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:16:37.057195 containerd[1465]: time="2025-02-13T20:16:37.057090534Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.061576 containerd[1465]: time="2025-02-13T20:16:37.061504548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:37.062622 containerd[1465]: time="2025-02-13T20:16:37.062463126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.567773407s" Feb 13 20:16:37.062622 containerd[1465]: time="2025-02-13T20:16:37.062511175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:16:37.064243 containerd[1465]: time="2025-02-13T20:16:37.063928874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:16:37.066431 containerd[1465]: time="2025-02-13T20:16:37.066254512Z" level=info msg="CreateContainer within sandbox \"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:16:37.084150 systemd-networkd[1383]: cali920e22d985e: Gained IPv6LL Feb 13 20:16:37.088049 containerd[1465]: time="2025-02-13T20:16:37.087917789Z" level=info msg="CreateContainer within sandbox \"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d376025fb9c2c6e1cd07e346a0992b07d1bd64636af61be0967da3a757c750d\"" Feb 13 20:16:37.091507 containerd[1465]: time="2025-02-13T20:16:37.090918189Z" level=info msg="StartContainer for \"3d376025fb9c2c6e1cd07e346a0992b07d1bd64636af61be0967da3a757c750d\"" Feb 13 20:16:37.171187 kubelet[2545]: I0213 20:16:37.168951 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v9hwk" podStartSLOduration=35.168923016 podStartE2EDuration="35.168923016s" podCreationTimestamp="2025-02-13 20:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:37.127309809 +0000 UTC m=+41.461360050" watchObservedRunningTime="2025-02-13 20:16:37.168923016 +0000 UTC m=+41.502973226" Feb 13 20:16:37.182315 
systemd[1]: Started cri-containerd-3d376025fb9c2c6e1cd07e346a0992b07d1bd64636af61be0967da3a757c750d.scope - libcontainer container 3d376025fb9c2c6e1cd07e346a0992b07d1bd64636af61be0967da3a757c750d. Feb 13 20:16:37.211451 systemd-networkd[1383]: cali742bfec23bd: Gained IPv6LL Feb 13 20:16:37.275663 containerd[1465]: time="2025-02-13T20:16:37.275339858Z" level=info msg="StartContainer for \"3d376025fb9c2c6e1cd07e346a0992b07d1bd64636af61be0967da3a757c750d\" returns successfully" Feb 13 20:16:38.152238 kubelet[2545]: I0213 20:16:38.151285 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-645b8d968-sl4b8" podStartSLOduration=26.581243139 podStartE2EDuration="30.151260058s" podCreationTimestamp="2025-02-13 20:16:08 +0000 UTC" firstStartedPulling="2025-02-13 20:16:33.493668058 +0000 UTC m=+37.827718263" lastFinishedPulling="2025-02-13 20:16:37.063684981 +0000 UTC m=+41.397735182" observedRunningTime="2025-02-13 20:16:38.150537853 +0000 UTC m=+42.484588063" watchObservedRunningTime="2025-02-13 20:16:38.151260058 +0000 UTC m=+42.485310244" Feb 13 20:16:38.413737 containerd[1465]: time="2025-02-13T20:16:38.413236615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.416012 containerd[1465]: time="2025-02-13T20:16:38.415940513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:16:38.417345 containerd[1465]: time="2025-02-13T20:16:38.417164152Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.422273 containerd[1465]: time="2025-02-13T20:16:38.421917770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.424864 containerd[1465]: time="2025-02-13T20:16:38.424748058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.360776005s" Feb 13 20:16:38.425877 containerd[1465]: time="2025-02-13T20:16:38.425026873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:16:38.428227 containerd[1465]: time="2025-02-13T20:16:38.427850835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:16:38.429392 containerd[1465]: time="2025-02-13T20:16:38.429355987Z" level=info msg="CreateContainer within sandbox \"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:16:38.460812 containerd[1465]: time="2025-02-13T20:16:38.460691356Z" level=info msg="CreateContainer within sandbox \"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89b842287a2c0caaf172775f2d2557d4cc07bb9ec2d7178433651e8630568256\"" Feb 13 20:16:38.463068 containerd[1465]: time="2025-02-13T20:16:38.462988853Z" level=info msg="StartContainer for \"89b842287a2c0caaf172775f2d2557d4cc07bb9ec2d7178433651e8630568256\"" Feb 13 20:16:38.465467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115878734.mount: Deactivated successfully. 
Feb 13 20:16:38.528403 systemd[1]: Started cri-containerd-89b842287a2c0caaf172775f2d2557d4cc07bb9ec2d7178433651e8630568256.scope - libcontainer container 89b842287a2c0caaf172775f2d2557d4cc07bb9ec2d7178433651e8630568256. Feb 13 20:16:38.600655 containerd[1465]: time="2025-02-13T20:16:38.599202943Z" level=info msg="StartContainer for \"89b842287a2c0caaf172775f2d2557d4cc07bb9ec2d7178433651e8630568256\" returns successfully" Feb 13 20:16:38.708711 containerd[1465]: time="2025-02-13T20:16:38.708647476Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.709805 containerd[1465]: time="2025-02-13T20:16:38.709719916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:16:38.712641 containerd[1465]: time="2025-02-13T20:16:38.712581157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 284.679835ms" Feb 13 20:16:38.712641 containerd[1465]: time="2025-02-13T20:16:38.712636979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:16:38.714196 containerd[1465]: time="2025-02-13T20:16:38.713830582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:16:38.715798 containerd[1465]: time="2025-02-13T20:16:38.715757699Z" level=info msg="CreateContainer within sandbox \"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:16:38.733947 
containerd[1465]: time="2025-02-13T20:16:38.733893613Z" level=info msg="CreateContainer within sandbox \"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62e8eea27f55f3344ed18ab5305f0a38becd510eaf0ac7391873f66795fb5ac8\"" Feb 13 20:16:38.734834 containerd[1465]: time="2025-02-13T20:16:38.734670400Z" level=info msg="StartContainer for \"62e8eea27f55f3344ed18ab5305f0a38becd510eaf0ac7391873f66795fb5ac8\"" Feb 13 20:16:38.771370 systemd[1]: Started cri-containerd-62e8eea27f55f3344ed18ab5305f0a38becd510eaf0ac7391873f66795fb5ac8.scope - libcontainer container 62e8eea27f55f3344ed18ab5305f0a38becd510eaf0ac7391873f66795fb5ac8. Feb 13 20:16:38.850333 containerd[1465]: time="2025-02-13T20:16:38.850260029Z" level=info msg="StartContainer for \"62e8eea27f55f3344ed18ab5305f0a38becd510eaf0ac7391873f66795fb5ac8\" returns successfully" Feb 13 20:16:39.146991 kubelet[2545]: I0213 20:16:39.146423 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-645b8d968-55plt" podStartSLOduration=28.789053483 podStartE2EDuration="31.146315909s" podCreationTimestamp="2025-02-13 20:16:08 +0000 UTC" firstStartedPulling="2025-02-13 20:16:36.3563776 +0000 UTC m=+40.690427795" lastFinishedPulling="2025-02-13 20:16:38.713640017 +0000 UTC m=+43.047690221" observedRunningTime="2025-02-13 20:16:39.145275434 +0000 UTC m=+43.479325642" watchObservedRunningTime="2025-02-13 20:16:39.146315909 +0000 UTC m=+43.480366122" Feb 13 20:16:40.103553 ntpd[1434]: Listen normally on 8 vxlan.calico 192.168.11.192:123 Feb 13 20:16:40.103682 ntpd[1434]: Listen normally on 9 calida89eb5cfe0 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 8 vxlan.calico 192.168.11.192:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 9 calida89eb5cfe0 [fe80::ecee:eeff:feee:eeee%4]:123 
Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 10 cali795bdc98f88 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 11 vxlan.calico [fe80::6410:7dff:feb4:2a26%6]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 12 cali3e149c17285 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 13 calidcda961bbee [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 14 cali742bfec23bd [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 20:16:40.104691 ntpd[1434]: 13 Feb 20:16:40 ntpd[1434]: Listen normally on 15 cali920e22d985e [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 20:16:40.103769 ntpd[1434]: Listen normally on 10 cali795bdc98f88 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 20:16:40.103837 ntpd[1434]: Listen normally on 11 vxlan.calico [fe80::6410:7dff:feb4:2a26%6]:123 Feb 13 20:16:40.103894 ntpd[1434]: Listen normally on 12 cali3e149c17285 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 20:16:40.103960 ntpd[1434]: Listen normally on 13 calidcda961bbee [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 20:16:40.104013 ntpd[1434]: Listen normally on 14 cali742bfec23bd [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 20:16:40.104091 ntpd[1434]: Listen normally on 15 cali920e22d985e [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 20:16:40.130323 kubelet[2545]: I0213 20:16:40.129071 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:41.466443 containerd[1465]: time="2025-02-13T20:16:41.466296348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:41.468541 containerd[1465]: time="2025-02-13T20:16:41.468443722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active 
requests=0, bytes read=34141192" Feb 13 20:16:41.470013 containerd[1465]: time="2025-02-13T20:16:41.469964551Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:41.475716 containerd[1465]: time="2025-02-13T20:16:41.475021063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:41.476903 containerd[1465]: time="2025-02-13T20:16:41.476852656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.762971426s" Feb 13 20:16:41.477037 containerd[1465]: time="2025-02-13T20:16:41.476907183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:16:41.479068 containerd[1465]: time="2025-02-13T20:16:41.478827080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:16:41.500877 containerd[1465]: time="2025-02-13T20:16:41.500823631Z" level=info msg="CreateContainer within sandbox \"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:16:41.530225 containerd[1465]: time="2025-02-13T20:16:41.530110668Z" level=info msg="CreateContainer within sandbox \"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f\"" Feb 13 20:16:41.531932 containerd[1465]: time="2025-02-13T20:16:41.531891618Z" level=info msg="StartContainer for \"9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f\"" Feb 13 20:16:41.598381 systemd[1]: Started cri-containerd-9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f.scope - libcontainer container 9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f. Feb 13 20:16:41.680068 containerd[1465]: time="2025-02-13T20:16:41.679899788Z" level=info msg="StartContainer for \"9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f\" returns successfully" Feb 13 20:16:42.336499 kubelet[2545]: I0213 20:16:42.336348 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:42.363027 kubelet[2545]: I0213 20:16:42.362239 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-658869675d-mqtbl" podStartSLOduration=28.265406292 podStartE2EDuration="33.362215191s" podCreationTimestamp="2025-02-13 20:16:09 +0000 UTC" firstStartedPulling="2025-02-13 20:16:36.38173805 +0000 UTC m=+40.715788248" lastFinishedPulling="2025-02-13 20:16:41.478546945 +0000 UTC m=+45.812597147" observedRunningTime="2025-02-13 20:16:42.153222534 +0000 UTC m=+46.487272746" watchObservedRunningTime="2025-02-13 20:16:42.362215191 +0000 UTC m=+46.696265401" Feb 13 20:16:43.058350 containerd[1465]: time="2025-02-13T20:16:43.058288675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.063431 containerd[1465]: time="2025-02-13T20:16:43.063110719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:16:43.068105 
containerd[1465]: time="2025-02-13T20:16:43.066811108Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.081154 containerd[1465]: time="2025-02-13T20:16:43.077370462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.082198 containerd[1465]: time="2025-02-13T20:16:43.082057911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.603187936s" Feb 13 20:16:43.082198 containerd[1465]: time="2025-02-13T20:16:43.082115042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:16:43.092540 containerd[1465]: time="2025-02-13T20:16:43.092484303Z" level=info msg="CreateContainer within sandbox \"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:16:43.117016 containerd[1465]: time="2025-02-13T20:16:43.116960811Z" level=info msg="CreateContainer within sandbox \"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46b931cf39ef67554d7ab2d1b2f16e48f259d009f6bcc6e060eb4b374bcd1c16\"" Feb 13 20:16:43.118447 containerd[1465]: time="2025-02-13T20:16:43.118408233Z" level=info 
msg="StartContainer for \"46b931cf39ef67554d7ab2d1b2f16e48f259d009f6bcc6e060eb4b374bcd1c16\"" Feb 13 20:16:43.158032 kubelet[2545]: I0213 20:16:43.157984 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:43.197391 systemd[1]: Started cri-containerd-46b931cf39ef67554d7ab2d1b2f16e48f259d009f6bcc6e060eb4b374bcd1c16.scope - libcontainer container 46b931cf39ef67554d7ab2d1b2f16e48f259d009f6bcc6e060eb4b374bcd1c16. Feb 13 20:16:43.241177 containerd[1465]: time="2025-02-13T20:16:43.241099673Z" level=info msg="StartContainer for \"46b931cf39ef67554d7ab2d1b2f16e48f259d009f6bcc6e060eb4b374bcd1c16\" returns successfully" Feb 13 20:16:43.927729 kubelet[2545]: I0213 20:16:43.927602 2545 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:16:43.927729 kubelet[2545]: I0213 20:16:43.927670 2545 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:16:44.180185 kubelet[2545]: I0213 20:16:44.179966 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vtgvf" podStartSLOduration=26.47843889 podStartE2EDuration="35.179940878s" podCreationTimestamp="2025-02-13 20:16:09 +0000 UTC" firstStartedPulling="2025-02-13 20:16:34.38299203 +0000 UTC m=+38.717042228" lastFinishedPulling="2025-02-13 20:16:43.084494027 +0000 UTC m=+47.418544216" observedRunningTime="2025-02-13 20:16:44.17855601 +0000 UTC m=+48.512606220" watchObservedRunningTime="2025-02-13 20:16:44.179940878 +0000 UTC m=+48.513991086" Feb 13 20:16:45.956531 systemd[1]: Started sshd@8-10.128.0.47:22-139.178.89.65:60514.service - OpenSSH per-connection server daemon (139.178.89.65:60514). 
Feb 13 20:16:46.249590 sshd[4906]: Accepted publickey for core from 139.178.89.65 port 60514 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:16:46.251409 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:46.257192 systemd-logind[1453]: New session 8 of user core. Feb 13 20:16:46.263340 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:16:46.597206 sshd[4906]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:46.603035 systemd[1]: sshd@8-10.128.0.47:22-139.178.89.65:60514.service: Deactivated successfully. Feb 13 20:16:46.605640 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:16:46.607670 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:16:46.609412 systemd-logind[1453]: Removed session 8. Feb 13 20:16:47.830759 kubelet[2545]: I0213 20:16:47.830679 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:49.865439 systemd[1]: run-containerd-runc-k8s.io-f6f5b3cb2660c1417b8ee5ee7eef259af676b1bce47272e82cc5f307596a5e16-runc.2BnDTe.mount: Deactivated successfully. Feb 13 20:16:51.653534 systemd[1]: Started sshd@9-10.128.0.47:22-139.178.89.65:60530.service - OpenSSH per-connection server daemon (139.178.89.65:60530). Feb 13 20:16:51.938016 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 60530 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:16:51.939907 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:51.946779 systemd-logind[1453]: New session 9 of user core. Feb 13 20:16:51.956398 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:16:52.226455 sshd[4990]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:52.232354 systemd[1]: sshd@9-10.128.0.47:22-139.178.89.65:60530.service: Deactivated successfully. 
Feb 13 20:16:52.235417 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:16:52.236409 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:16:52.238057 systemd-logind[1453]: Removed session 9. Feb 13 20:16:55.837951 containerd[1465]: time="2025-02-13T20:16:55.837899914Z" level=info msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\"" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.886 [WARNING][5022] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55870946-44e5-4646-b49c-964c3d25ad4a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01", Pod:"csi-node-driver-vtgvf", Endpoint:"eth0", 
ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e149c17285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.886 [INFO][5022] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.886 [INFO][5022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" iface="eth0" netns="" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.886 [INFO][5022] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.886 [INFO][5022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.919 [INFO][5028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.919 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.919 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.934 [WARNING][5028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.934 [INFO][5028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.936 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:55.940508 containerd[1465]: 2025-02-13 20:16:55.938 [INFO][5022] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:55.940508 containerd[1465]: time="2025-02-13T20:16:55.940358460Z" level=info msg="TearDown network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" successfully" Feb 13 20:16:55.940508 containerd[1465]: time="2025-02-13T20:16:55.940409420Z" level=info msg="StopPodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" returns successfully" Feb 13 20:16:55.943202 containerd[1465]: time="2025-02-13T20:16:55.941965961Z" level=info msg="RemovePodSandbox for \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\"" Feb 13 20:16:55.943202 containerd[1465]: time="2025-02-13T20:16:55.942005331Z" level=info msg="Forcibly stopping sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\"" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:55.989 [WARNING][5046] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55870946-44e5-4646-b49c-964c3d25ad4a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"5fd6393105a871630f98c8f47eeb022ed170fd4e00b7ebb2e1c1e66946d56e01", Pod:"csi-node-driver-vtgvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e149c17285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:55.989 [INFO][5046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:55.989 [INFO][5046] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" iface="eth0" netns="" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:55.989 [INFO][5046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:55.989 [INFO][5046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.013 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.013 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.013 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.022 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.022 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" HandleID="k8s-pod-network.95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-csi--node--driver--vtgvf-eth0" Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.024 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.027506 containerd[1465]: 2025-02-13 20:16:56.026 [INFO][5046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc" Feb 13 20:16:56.027506 containerd[1465]: time="2025-02-13T20:16:56.027346773Z" level=info msg="TearDown network for sandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" successfully" Feb 13 20:16:56.031893 containerd[1465]: time="2025-02-13T20:16:56.031820003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:56.034440 containerd[1465]: time="2025-02-13T20:16:56.031913936Z" level=info msg="RemovePodSandbox \"95a006b6b0a6e7690ac3ec1863984597d8a534b7b175c1a7ad30ac9e6dab69dc\" returns successfully" Feb 13 20:16:56.034440 containerd[1465]: time="2025-02-13T20:16:56.033963276Z" level=info msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\"" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.092 [WARNING][5070] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c63b30a8-8e62-4267-9609-912d1a8617c5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6", Pod:"calico-apiserver-645b8d968-55plt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcda961bbee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.093 [INFO][5070] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.093 [INFO][5070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" iface="eth0" netns="" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.093 [INFO][5070] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.093 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.118 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.118 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.118 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.128 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.128 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.131 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.137262 containerd[1465]: 2025-02-13 20:16:56.133 [INFO][5070] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.137262 containerd[1465]: time="2025-02-13T20:16:56.137003573Z" level=info msg="TearDown network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" successfully" Feb 13 20:16:56.137262 containerd[1465]: time="2025-02-13T20:16:56.137056638Z" level=info msg="StopPodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" returns successfully" Feb 13 20:16:56.140295 containerd[1465]: time="2025-02-13T20:16:56.138776179Z" level=info msg="RemovePodSandbox for \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\"" Feb 13 20:16:56.140295 containerd[1465]: time="2025-02-13T20:16:56.138858190Z" level=info msg="Forcibly stopping sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\"" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.183 [WARNING][5094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"c63b30a8-8e62-4267-9609-912d1a8617c5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"0762fb99debf0c3d2902d18389fe21388e7b6b3745663736b2c20bb2273afae6", Pod:"calico-apiserver-645b8d968-55plt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcda961bbee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.184 [INFO][5094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.184 [INFO][5094] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" iface="eth0" netns="" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.184 [INFO][5094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.184 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.230 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.230 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.230 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.240 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.240 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" HandleID="k8s-pod-network.61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--55plt-eth0" Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.242 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.246325 containerd[1465]: 2025-02-13 20:16:56.243 [INFO][5094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9" Feb 13 20:16:56.246325 containerd[1465]: time="2025-02-13T20:16:56.245751306Z" level=info msg="TearDown network for sandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" successfully" Feb 13 20:16:56.252354 containerd[1465]: time="2025-02-13T20:16:56.252284603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:56.252479 containerd[1465]: time="2025-02-13T20:16:56.252380495Z" level=info msg="RemovePodSandbox \"61ad10fd325f884b8d09aac1e1919ef109e806a99a76a91c615086660e82d8b9\" returns successfully" Feb 13 20:16:56.253091 containerd[1465]: time="2025-02-13T20:16:56.253056077Z" level=info msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\"" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.296 [WARNING][5120] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0", GenerateName:"calico-kube-controllers-658869675d-", Namespace:"calico-system", SelfLink:"", UID:"c79cadd8-8457-48ba-9385-1ff5bfefcfc8", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658869675d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2", Pod:"calico-kube-controllers-658869675d-mqtbl", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali920e22d985e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.297 [INFO][5120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.297 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" iface="eth0" netns="" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.297 [INFO][5120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.297 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.322 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.323 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.323 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.335 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.335 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.337 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.339793 containerd[1465]: 2025-02-13 20:16:56.338 [INFO][5120] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.340652 containerd[1465]: time="2025-02-13T20:16:56.339808034Z" level=info msg="TearDown network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" successfully" Feb 13 20:16:56.340652 containerd[1465]: time="2025-02-13T20:16:56.339843507Z" level=info msg="StopPodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" returns successfully" Feb 13 20:16:56.341006 containerd[1465]: time="2025-02-13T20:16:56.340928296Z" level=info msg="RemovePodSandbox for \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\"" Feb 13 20:16:56.341006 containerd[1465]: time="2025-02-13T20:16:56.340969233Z" level=info msg="Forcibly stopping sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\"" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.388 [WARNING][5144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0", GenerateName:"calico-kube-controllers-658869675d-", Namespace:"calico-system", SelfLink:"", UID:"c79cadd8-8457-48ba-9385-1ff5bfefcfc8", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658869675d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"8f40dc1656f18bcc55e839bc8e89b378d7d1b7c89b066ae85ba38d35025d07d2", Pod:"calico-kube-controllers-658869675d-mqtbl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali920e22d985e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.388 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.388 
[INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" iface="eth0" netns="" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.388 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.388 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.413 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.413 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.413 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.421 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.422 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" HandleID="k8s-pod-network.172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--kube--controllers--658869675d--mqtbl-eth0" Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.423 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.426997 containerd[1465]: 2025-02-13 20:16:56.425 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc" Feb 13 20:16:56.426997 containerd[1465]: time="2025-02-13T20:16:56.427000784Z" level=info msg="TearDown network for sandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" successfully" Feb 13 20:16:56.437175 containerd[1465]: time="2025-02-13T20:16:56.437107948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:56.437526 containerd[1465]: time="2025-02-13T20:16:56.437365007Z" level=info msg="RemovePodSandbox \"172e0237118fbd610ce2e53b9d23264b7816efb416b312d7daed44ff3122a6bc\" returns successfully" Feb 13 20:16:56.438378 containerd[1465]: time="2025-02-13T20:16:56.438317221Z" level=info msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.486 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab60a360-887a-466f-9f36-830c771a9b75", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c", Pod:"coredns-6f6b679f8f-v9hwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali742bfec23bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.486 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.486 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" iface="eth0" netns="" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.486 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.486 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.512 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.513 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.513 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.522 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.522 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.524 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.529527 containerd[1465]: 2025-02-13 20:16:56.527 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.529527 containerd[1465]: time="2025-02-13T20:16:56.529421180Z" level=info msg="TearDown network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" successfully" Feb 13 20:16:56.529527 containerd[1465]: time="2025-02-13T20:16:56.529454626Z" level=info msg="StopPodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" returns successfully" Feb 13 20:16:56.530752 containerd[1465]: time="2025-02-13T20:16:56.530297316Z" level=info msg="RemovePodSandbox for \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"" Feb 13 20:16:56.530752 containerd[1465]: time="2025-02-13T20:16:56.530335447Z" level=info msg="Forcibly stopping sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\"" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.591 [WARNING][5193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab60a360-887a-466f-9f36-830c771a9b75", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"a4feea5e5861cbc5b319f064f37ad028920a2a3c6f5e5d7fad2ebfd821c3785c", Pod:"coredns-6f6b679f8f-v9hwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali742bfec23bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 
20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.591 [INFO][5193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.591 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" iface="eth0" netns="" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.591 [INFO][5193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.592 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.622 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.622 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.623 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.631 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.631 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" HandleID="k8s-pod-network.748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--v9hwk-eth0" Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.633 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.636150 containerd[1465]: 2025-02-13 20:16:56.634 [INFO][5193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d" Feb 13 20:16:56.636978 containerd[1465]: time="2025-02-13T20:16:56.636196029Z" level=info msg="TearDown network for sandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" successfully" Feb 13 20:16:56.640378 containerd[1465]: time="2025-02-13T20:16:56.640305609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:56.640585 containerd[1465]: time="2025-02-13T20:16:56.640401753Z" level=info msg="RemovePodSandbox \"748c4fbc4cc7322dc58dff29d021304ef8a1fe6f29c28c6f0f1ffbb2eb503e1d\" returns successfully" Feb 13 20:16:56.641026 containerd[1465]: time="2025-02-13T20:16:56.640971038Z" level=info msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\"" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.688 [WARNING][5217] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"061fba9c-316a-4909-a848-0cb5a7c86a19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786", Pod:"calico-apiserver-645b8d968-sl4b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795bdc98f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.688 [INFO][5217] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.688 [INFO][5217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" iface="eth0" netns="" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.688 [INFO][5217] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.688 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.713 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.714 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.714 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.723 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.723 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.725 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.728370 containerd[1465]: 2025-02-13 20:16:56.727 [INFO][5217] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.728370 containerd[1465]: time="2025-02-13T20:16:56.728313419Z" level=info msg="TearDown network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" successfully" Feb 13 20:16:56.728370 containerd[1465]: time="2025-02-13T20:16:56.728350617Z" level=info msg="StopPodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" returns successfully" Feb 13 20:16:56.731165 containerd[1465]: time="2025-02-13T20:16:56.730076832Z" level=info msg="RemovePodSandbox for \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\"" Feb 13 20:16:56.731165 containerd[1465]: time="2025-02-13T20:16:56.730188309Z" level=info msg="Forcibly stopping sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\"" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.777 [WARNING][5241] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0", GenerateName:"calico-apiserver-645b8d968-", Namespace:"calico-apiserver", SelfLink:"", UID:"061fba9c-316a-4909-a848-0cb5a7c86a19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"645b8d968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"eacd8de4c1d9c9ad9cbd1d4d14ad39b4bc45144e735b97aeec738f43019e3786", Pod:"calico-apiserver-645b8d968-sl4b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali795bdc98f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.777 [INFO][5241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.778 [INFO][5241] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" iface="eth0" netns="" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.778 [INFO][5241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.778 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.806 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.806 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.806 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.814 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.815 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" HandleID="k8s-pod-network.19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-calico--apiserver--645b8d968--sl4b8-eth0" Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.816 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.819344 containerd[1465]: 2025-02-13 20:16:56.818 [INFO][5241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" Feb 13 20:16:56.819344 containerd[1465]: time="2025-02-13T20:16:56.819287547Z" level=info msg="TearDown network for sandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" successfully" Feb 13 20:16:56.824170 containerd[1465]: time="2025-02-13T20:16:56.824098780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:56.824361 containerd[1465]: time="2025-02-13T20:16:56.824217619Z" level=info msg="RemovePodSandbox \"19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f\" returns successfully" Feb 13 20:16:56.824834 containerd[1465]: time="2025-02-13T20:16:56.824802048Z" level=info msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\"" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.870 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8b9894-04eb-4f05-8268-01b34a155c39", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a", Pod:"coredns-6f6b679f8f-pbzwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calida89eb5cfe0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.870 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.870 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" iface="eth0" netns="" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.870 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.870 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.897 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.897 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.897 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.906 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.906 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.907 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:56.910673 containerd[1465]: 2025-02-13 20:16:56.909 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:56.912424 containerd[1465]: time="2025-02-13T20:16:56.910693645Z" level=info msg="TearDown network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" successfully" Feb 13 20:16:56.912424 containerd[1465]: time="2025-02-13T20:16:56.910731804Z" level=info msg="StopPodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" returns successfully" Feb 13 20:16:56.912424 containerd[1465]: time="2025-02-13T20:16:56.911667915Z" level=info msg="RemovePodSandbox for \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\"" Feb 13 20:16:56.912424 containerd[1465]: time="2025-02-13T20:16:56.911706540Z" level=info msg="Forcibly stopping sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\"" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.959 [WARNING][5291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8b9894-04eb-4f05-8268-01b34a155c39", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-004886bd6f06381f96e1.c.flatcar-212911.internal", ContainerID:"e9af7bbab6bb9dc0691f2cae8a677bd04b66eeca750edc7af5898c075449b77a", Pod:"coredns-6f6b679f8f-pbzwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida89eb5cfe0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 
20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.960 [INFO][5291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.960 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" iface="eth0" netns="" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.960 [INFO][5291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.960 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.987 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.987 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.987 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.996 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.996 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" HandleID="k8s-pod-network.d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Workload="ci--4081--3--1--004886bd6f06381f96e1.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--pbzwk-eth0" Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.998 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:57.002571 containerd[1465]: 2025-02-13 20:16:56.999 [INFO][5291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654" Feb 13 20:16:57.002571 containerd[1465]: time="2025-02-13T20:16:57.000795996Z" level=info msg="TearDown network for sandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" successfully" Feb 13 20:16:57.006446 containerd[1465]: time="2025-02-13T20:16:57.006370599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:57.006659 containerd[1465]: time="2025-02-13T20:16:57.006472464Z" level=info msg="RemovePodSandbox \"d1eb0f8b46a97e9f95c089cc484e5aebf6ab19b9d9ab8267483b8257f018c654\" returns successfully" Feb 13 20:16:57.283547 systemd[1]: Started sshd@10-10.128.0.47:22-139.178.89.65:57676.service - OpenSSH per-connection server daemon (139.178.89.65:57676). 
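The journal above shows the same Calico CNI teardown sequence three times (StopPodSandbox, a CNI_CONTAINERID-mismatch warning so the WorkloadEndpoint is kept, IPAM lock acquire/release, then "Teardown processing complete."). A minimal sketch of how one might tally completed teardowns per sandbox from a dump like this — the regex and helper names are assumptions for illustration, not part of any containerd or Calico tooling:

```python
import re

# Each completed Calico teardown in this journal ends with a
# 'Teardown processing complete.' line naming the sandbox ContainerID.
TEARDOWN_RE = re.compile(
    r'Teardown processing complete\.\s*ContainerID="([0-9a-f]+)"'
)

def count_teardowns(log_text: str) -> dict:
    """Map sandbox ContainerID -> number of completed teardown events."""
    counts: dict = {}
    for match in TEARDOWN_RE.finditer(log_text):
        cid = match.group(1)
        counts[cid] = counts.get(cid, 0) + 1
    return counts

# Two events for the same sandbox, as in the log above (StopPodSandbox
# followed by the forcible stop during RemovePodSandbox).
sample = (
    'cni-plugin/k8s.go 621: Teardown processing complete. '
    'ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f" '
    'cni-plugin/k8s.go 621: Teardown processing complete. '
    'ContainerID="19a37afece706debeef70b8751f779618f699a863eb5f7b3411b059259af0b5f"'
)
teardowns = count_teardowns(sample)
```

Seeing a count of 2 per sandbox is expected here: RemovePodSandbox first runs an ordinary stop, then a "Forcibly stopping sandbox" pass, and each emits its own teardown completion.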
Feb 13 20:16:57.563908 sshd[5305]: Accepted publickey for core from 139.178.89.65 port 57676 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:16:57.565802 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:57.572424 systemd-logind[1453]: New session 10 of user core. Feb 13 20:16:57.581372 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:16:57.868969 sshd[5305]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:57.875057 systemd[1]: sshd@10-10.128.0.47:22-139.178.89.65:57676.service: Deactivated successfully. Feb 13 20:16:57.877563 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:16:57.878806 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:16:57.880477 systemd-logind[1453]: Removed session 10. Feb 13 20:16:57.927772 systemd[1]: Started sshd@11-10.128.0.47:22-139.178.89.65:57680.service - OpenSSH per-connection server daemon (139.178.89.65:57680). Feb 13 20:16:58.211992 sshd[5318]: Accepted publickey for core from 139.178.89.65 port 57680 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:16:58.213877 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:58.221153 systemd-logind[1453]: New session 11 of user core. Feb 13 20:16:58.226338 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:16:58.548185 sshd[5318]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:58.552966 systemd[1]: sshd@11-10.128.0.47:22-139.178.89.65:57680.service: Deactivated successfully. Feb 13 20:16:58.555982 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:16:58.558455 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:16:58.560342 systemd-logind[1453]: Removed session 11. 
Feb 13 20:16:58.612526 systemd[1]: Started sshd@12-10.128.0.47:22-139.178.89.65:57692.service - OpenSSH per-connection server daemon (139.178.89.65:57692). Feb 13 20:16:58.898717 sshd[5329]: Accepted publickey for core from 139.178.89.65 port 57692 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:16:58.900617 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:58.907374 systemd-logind[1453]: New session 12 of user core. Feb 13 20:16:58.913340 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:16:59.191835 sshd[5329]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:59.197411 systemd[1]: sshd@12-10.128.0.47:22-139.178.89.65:57692.service: Deactivated successfully. Feb 13 20:16:59.200156 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:16:59.201589 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:16:59.203260 systemd-logind[1453]: Removed session 12. Feb 13 20:17:04.248598 systemd[1]: Started sshd@13-10.128.0.47:22-139.178.89.65:57698.service - OpenSSH per-connection server daemon (139.178.89.65:57698). Feb 13 20:17:04.538483 sshd[5343]: Accepted publickey for core from 139.178.89.65 port 57698 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:17:04.540487 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:04.547241 systemd-logind[1453]: New session 13 of user core. Feb 13 20:17:04.552382 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:17:04.839219 sshd[5343]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:04.850046 systemd[1]: sshd@13-10.128.0.47:22-139.178.89.65:57698.service: Deactivated successfully. Feb 13 20:17:04.853058 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:17:04.854538 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. 
Feb 13 20:17:04.856020 systemd-logind[1453]: Removed session 13. Feb 13 20:17:09.894553 systemd[1]: Started sshd@14-10.128.0.47:22-139.178.89.65:36674.service - OpenSSH per-connection server daemon (139.178.89.65:36674). Feb 13 20:17:10.182220 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 36674 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:17:10.184048 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:10.190787 systemd-logind[1453]: New session 14 of user core. Feb 13 20:17:10.197370 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:17:10.480031 sshd[5361]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:10.485448 systemd[1]: sshd@14-10.128.0.47:22-139.178.89.65:36674.service: Deactivated successfully. Feb 13 20:17:10.488687 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:17:10.490987 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:17:10.492620 systemd-logind[1453]: Removed session 14. Feb 13 20:17:15.536845 systemd[1]: Started sshd@15-10.128.0.47:22-139.178.89.65:57694.service - OpenSSH per-connection server daemon (139.178.89.65:57694). Feb 13 20:17:15.821902 sshd[5380]: Accepted publickey for core from 139.178.89.65 port 57694 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:17:15.824536 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:15.834414 systemd-logind[1453]: New session 15 of user core. Feb 13 20:17:15.838873 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:17:16.112211 sshd[5380]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:16.118169 systemd[1]: sshd@15-10.128.0.47:22-139.178.89.65:57694.service: Deactivated successfully. Feb 13 20:17:16.120845 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 13 20:17:16.121940 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:17:16.123304 systemd-logind[1453]: Removed session 15. Feb 13 20:17:17.864828 systemd[1]: run-containerd-runc-k8s.io-9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f-runc.cvvzea.mount: Deactivated successfully. Feb 13 20:17:21.177255 systemd[1]: Started sshd@16-10.128.0.47:22-139.178.89.65:57708.service - OpenSSH per-connection server daemon (139.178.89.65:57708). Feb 13 20:17:21.467967 sshd[5436]: Accepted publickey for core from 139.178.89.65 port 57708 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:17:21.469976 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:21.476906 systemd-logind[1453]: New session 16 of user core. Feb 13 20:17:21.479405 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:17:21.763920 sshd[5436]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:21.770418 systemd[1]: sshd@16-10.128.0.47:22-139.178.89.65:57708.service: Deactivated successfully. Feb 13 20:17:21.773130 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:17:21.774425 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:17:21.776292 systemd-logind[1453]: Removed session 16. Feb 13 20:17:21.821605 systemd[1]: Started sshd@17-10.128.0.47:22-139.178.89.65:57718.service - OpenSSH per-connection server daemon (139.178.89.65:57718). Feb 13 20:17:22.109240 sshd[5449]: Accepted publickey for core from 139.178.89.65 port 57718 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:17:22.110766 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:22.121726 systemd-logind[1453]: New session 17 of user core. Feb 13 20:17:22.125369 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 20:17:22.510943 sshd[5449]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:22.516111 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:17:22.518196 systemd[1]: sshd@17-10.128.0.47:22-139.178.89.65:57718.service: Deactivated successfully.
Feb 13 20:17:22.524195 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:17:22.528034 systemd-logind[1453]: Removed session 17.
Feb 13 20:17:22.568917 systemd[1]: Started sshd@18-10.128.0.47:22-139.178.89.65:57728.service - OpenSSH per-connection server daemon (139.178.89.65:57728).
Feb 13 20:17:22.860453 sshd[5460]: Accepted publickey for core from 139.178.89.65 port 57728 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:22.862641 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:22.869258 systemd-logind[1453]: New session 18 of user core.
Feb 13 20:17:22.874586 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:17:23.772476 systemd[1]: run-containerd-runc-k8s.io-9fcd390c84bb3d4fe2184f3c2da4291279080e84b7599c20680395ac3308c67f-runc.n78orS.mount: Deactivated successfully.
Feb 13 20:17:25.387513 sshd[5460]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:25.396703 systemd[1]: sshd@18-10.128.0.47:22-139.178.89.65:57728.service: Deactivated successfully.
Feb 13 20:17:25.403084 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:17:25.408567 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:17:25.411012 systemd-logind[1453]: Removed session 18.
Feb 13 20:17:25.446874 systemd[1]: Started sshd@19-10.128.0.47:22-139.178.89.65:39300.service - OpenSSH per-connection server daemon (139.178.89.65:39300).
Feb 13 20:17:25.745042 sshd[5496]: Accepted publickey for core from 139.178.89.65 port 39300 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:25.746011 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:25.755200 systemd-logind[1453]: New session 19 of user core.
Feb 13 20:17:25.762437 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:17:26.197969 sshd[5496]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:26.208219 systemd[1]: sshd@19-10.128.0.47:22-139.178.89.65:39300.service: Deactivated successfully.
Feb 13 20:17:26.212919 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:17:26.214452 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:17:26.216518 systemd-logind[1453]: Removed session 19.
Feb 13 20:17:26.255795 systemd[1]: Started sshd@20-10.128.0.47:22-139.178.89.65:39310.service - OpenSSH per-connection server daemon (139.178.89.65:39310).
Feb 13 20:17:26.545996 sshd[5507]: Accepted publickey for core from 139.178.89.65 port 39310 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:26.548078 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:26.554721 systemd-logind[1453]: New session 20 of user core.
Feb 13 20:17:26.566391 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:17:26.838342 sshd[5507]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:26.843367 systemd[1]: sshd@20-10.128.0.47:22-139.178.89.65:39310.service: Deactivated successfully.
Feb 13 20:17:26.845894 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:17:26.848091 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:17:26.849622 systemd-logind[1453]: Removed session 20.
Feb 13 20:17:31.895558 systemd[1]: Started sshd@21-10.128.0.47:22-139.178.89.65:39314.service - OpenSSH per-connection server daemon (139.178.89.65:39314).
Feb 13 20:17:32.181646 sshd[5520]: Accepted publickey for core from 139.178.89.65 port 39314 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:32.183547 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:32.190508 systemd-logind[1453]: New session 21 of user core.
Feb 13 20:17:32.199384 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:17:32.466997 sshd[5520]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:32.471767 systemd[1]: sshd@21-10.128.0.47:22-139.178.89.65:39314.service: Deactivated successfully.
Feb 13 20:17:32.474460 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:17:32.476519 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:17:32.478420 systemd-logind[1453]: Removed session 21.
Feb 13 20:17:37.526551 systemd[1]: Started sshd@22-10.128.0.47:22-139.178.89.65:46438.service - OpenSSH per-connection server daemon (139.178.89.65:46438).
Feb 13 20:17:37.810989 sshd[5537]: Accepted publickey for core from 139.178.89.65 port 46438 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:37.812896 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:37.819627 systemd-logind[1453]: New session 22 of user core.
Feb 13 20:17:37.823368 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:17:38.097464 sshd[5537]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:38.103533 systemd[1]: sshd@22-10.128.0.47:22-139.178.89.65:46438.service: Deactivated successfully.
Feb 13 20:17:38.106073 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:17:38.107068 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:17:38.108723 systemd-logind[1453]: Removed session 22.
Feb 13 20:17:43.155536 systemd[1]: Started sshd@23-10.128.0.47:22-139.178.89.65:46450.service - OpenSSH per-connection server daemon (139.178.89.65:46450).
Feb 13 20:17:43.451572 sshd[5550]: Accepted publickey for core from 139.178.89.65 port 46450 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:43.453503 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:43.460252 systemd-logind[1453]: New session 23 of user core.
Feb 13 20:17:43.465334 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:17:43.739226 sshd[5550]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:43.744098 systemd[1]: sshd@23-10.128.0.47:22-139.178.89.65:46450.service: Deactivated successfully.
Feb 13 20:17:43.747159 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:17:43.749525 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:17:43.751503 systemd-logind[1453]: Removed session 23.
Feb 13 20:17:48.796533 systemd[1]: Started sshd@24-10.128.0.47:22-139.178.89.65:56530.service - OpenSSH per-connection server daemon (139.178.89.65:56530).
Feb 13 20:17:49.080903 sshd[5581]: Accepted publickey for core from 139.178.89.65 port 56530 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:17:49.082779 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:17:49.089164 systemd-logind[1453]: New session 24 of user core.
Feb 13 20:17:49.096321 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:17:49.370649 sshd[5581]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:49.376557 systemd[1]: sshd@24-10.128.0.47:22-139.178.89.65:56530.service: Deactivated successfully.
Feb 13 20:17:49.382045 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:17:49.387028 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:17:49.390537 systemd-logind[1453]: Removed session 24.