Jan 13 21:26:47.089993 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:26:47.090038 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:47.090056 kernel: BIOS-provided physical RAM map:
Jan 13 21:26:47.090069 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 21:26:47.090083 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 21:26:47.090096 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 21:26:47.090112 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 21:26:47.090132 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 21:26:47.090147 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 13 21:26:47.090162 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 13 21:26:47.090178 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 13 21:26:47.090193 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 13 21:26:47.090207 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 21:26:47.090223 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 21:26:47.090245 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 21:26:47.090262 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 21:26:47.090278 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 21:26:47.090295 kernel: NX (Execute Disable) protection: active
Jan 13 21:26:47.090311 kernel: APIC: Static calls initialized
Jan 13 21:26:47.090328 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:26:47.090345 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 13 21:26:47.090362 kernel: SMBIOS 2.4 present.
Jan 13 21:26:47.090379 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 21:26:47.090395 kernel: Hypervisor detected: KVM
Jan 13 21:26:47.090415 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:26:47.090431 kernel: kvm-clock: using sched offset of 11955698206 cycles
Jan 13 21:26:47.090448 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:26:47.090473 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 21:26:47.090490 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:26:47.090507 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:26:47.090524 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 21:26:47.090542 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 21:26:47.090559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:26:47.090580 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 21:26:47.090597 kernel: Using GB pages for direct mapping
Jan 13 21:26:47.090613 kernel: Secure boot disabled
Jan 13 21:26:47.090631 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:26:47.090647 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 21:26:47.090676 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 21:26:47.090706 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 21:26:47.090731 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 21:26:47.090752 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 21:26:47.090771 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 21:26:47.090789 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 21:26:47.090807 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 21:26:47.090826 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 21:26:47.090844 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 21:26:47.090865 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 21:26:47.090884 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 21:26:47.090902 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 21:26:47.090920 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 21:26:47.090938 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 21:26:47.090956 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 21:26:47.090975 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 21:26:47.090992 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 21:26:47.091010 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 21:26:47.091032 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 21:26:47.091050 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:26:47.091068 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:26:47.091086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:26:47.091104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 21:26:47.091122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 21:26:47.091141 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 21:26:47.091159 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 21:26:47.091177 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 13 21:26:47.091199 kernel: Zone ranges:
Jan 13 21:26:47.091217 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:26:47.091235 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:26:47.091253 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:26:47.091271 kernel: Movable zone start for each node
Jan 13 21:26:47.091289 kernel: Early memory node ranges
Jan 13 21:26:47.091307 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 21:26:47.091325 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 21:26:47.091342 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 13 21:26:47.091364 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 21:26:47.091382 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:26:47.091400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 21:26:47.091417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:26:47.091434 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 21:26:47.091468 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 21:26:47.091486 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 21:26:47.091503 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 21:26:47.091520 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:26:47.091537 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:26:47.091557 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:26:47.091576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:26:47.091594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:26:47.091612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:26:47.091630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:26:47.091649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:26:47.091690 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:26:47.091707 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:26:47.091726 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:26:47.091742 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:26:47.091757 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:26:47.091772 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:26:47.091787 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:26:47.091801 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:26:47.091815 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:26:47.091831 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:26:47.091849 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:47.091871 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:26:47.091888 kernel: random: crng init done
Jan 13 21:26:47.091904 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 21:26:47.091921 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:26:47.091936 kernel: Fallback order for Node 0: 0
Jan 13 21:26:47.091953 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 13 21:26:47.091970 kernel: Policy zone: Normal
Jan 13 21:26:47.091987 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:26:47.092003 kernel: software IO TLB: area num 2.
Jan 13 21:26:47.092044 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved)
Jan 13 21:26:47.092062 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:26:47.092079 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:26:47.092096 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:26:47.092114 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:26:47.092131 kernel: Dynamic Preempt: voluntary
Jan 13 21:26:47.092149 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:26:47.092167 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:26:47.092203 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:26:47.092221 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:26:47.092239 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:26:47.092261 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:26:47.092278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:26:47.092297 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:26:47.092315 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:26:47.092333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:26:47.092352 kernel: Console: colour dummy device 80x25
Jan 13 21:26:47.092374 kernel: printk: console [ttyS0] enabled
Jan 13 21:26:47.092393 kernel: ACPI: Core revision 20230628
Jan 13 21:26:47.092411 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:26:47.092429 kernel: x2apic enabled
Jan 13 21:26:47.092447 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:26:47.092474 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 21:26:47.092493 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:26:47.092512 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 21:26:47.092534 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 21:26:47.092552 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 21:26:47.092571 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:26:47.092589 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:26:47.092607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:26:47.092625 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 21:26:47.092643 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:26:47.092675 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:26:47.092694 kernel: RETBleed: Mitigation: IBRS
Jan 13 21:26:47.092717 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:26:47.092735 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 21:26:47.092754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:26:47.092772 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:26:47.092790 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:26:47.092808 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:26:47.092826 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:26:47.092844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:26:47.092862 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:26:47.092884 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:26:47.092903 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:26:47.092920 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:26:47.092939 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:26:47.092957 kernel: landlock: Up and running.
Jan 13 21:26:47.092976 kernel: SELinux: Initializing.
Jan 13 21:26:47.092994 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:26:47.093012 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:26:47.093031 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 21:26:47.093053 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:47.093072 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:47.093090 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:47.093109 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 21:26:47.093127 kernel: signal: max sigframe size: 1776
Jan 13 21:26:47.093145 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:26:47.093164 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:26:47.093181 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:26:47.093199 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:26:47.093221 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:26:47.093239 kernel: .... node #0, CPUs: #1
Jan 13 21:26:47.093258 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:26:47.093286 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:26:47.093304 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:26:47.093319 kernel: smpboot: Max logical packages: 1
Jan 13 21:26:47.093336 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 21:26:47.093355 kernel: devtmpfs: initialized
Jan 13 21:26:47.093379 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:26:47.093398 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 21:26:47.093417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:26:47.093436 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:26:47.093463 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:26:47.093482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:26:47.093501 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:26:47.093520 kernel: audit: type=2000 audit(1736803605.740:1): state=initialized audit_enabled=0 res=1
Jan 13 21:26:47.093539 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:26:47.093562 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:26:47.093580 kernel: cpuidle: using governor menu
Jan 13 21:26:47.093599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:26:47.093617 kernel: dca service started, version 1.12.1
Jan 13 21:26:47.093635 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:26:47.093653 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:26:47.093704 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:26:47.093724 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:26:47.093742 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:26:47.093766 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:26:47.093784 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:26:47.093803 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:26:47.093821 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:26:47.093839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:26:47.093857 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:26:47.093875 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:26:47.093894 kernel: ACPI: Interpreter enabled
Jan 13 21:26:47.093912 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:26:47.093935 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:26:47.093954 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:26:47.093972 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 21:26:47.093991 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:26:47.094010 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:26:47.094261 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:26:47.094467 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:26:47.094692 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:26:47.094718 kernel: PCI host bridge to bus 0000:00
Jan 13 21:26:47.094902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:26:47.095071 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:26:47.095238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:47.095400 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 21:26:47.095573 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:26:47.095804 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:26:47.096019 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 21:26:47.096212 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:26:47.096405 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:26:47.096607 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 21:26:47.096834 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 21:26:47.097032 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 21:26:47.097227 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:26:47.097416 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 21:26:47.097615 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 21:26:47.097843 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:26:47.098029 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 21:26:47.098211 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 21:26:47.098242 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:26:47.098262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:26:47.098280 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:26:47.098297 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:26:47.098317 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:26:47.098337 kernel: iommu: Default domain type: Translated
Jan 13 21:26:47.098356 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:26:47.098375 kernel: efivars: Registered efivars operations
Jan 13 21:26:47.098395 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:26:47.098420 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:26:47.098439 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 21:26:47.098466 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 21:26:47.098485 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 21:26:47.098504 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 21:26:47.098524 kernel: vgaarb: loaded
Jan 13 21:26:47.098544 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:26:47.098563 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:26:47.098583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:26:47.098606 kernel: pnp: PnP ACPI init
Jan 13 21:26:47.098626 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 21:26:47.098646 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:26:47.098690 kernel: NET: Registered PF_INET protocol family
Jan 13 21:26:47.098709 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:26:47.098729 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 21:26:47.098749 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:26:47.098769 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:26:47.098789 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 21:26:47.098813 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 21:26:47.098832 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:26:47.098852 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:26:47.098872 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:26:47.098892 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:26:47.099074 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:26:47.099240 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:26:47.099401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:47.099581 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 21:26:47.099798 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:26:47.099825 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:26:47.099840 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:26:47.099863 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 21:26:47.099887 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:26:47.099910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:26:47.099930 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:26:47.099955 kernel: Initialise system trusted keyrings
Jan 13 21:26:47.099974 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 21:26:47.099993 kernel: Key type asymmetric registered
Jan 13 21:26:47.100012 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:26:47.100030 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:26:47.100050 kernel: io scheduler mq-deadline registered
Jan 13 21:26:47.100069 kernel: io scheduler kyber registered
Jan 13 21:26:47.100087 kernel: io scheduler bfq registered
Jan 13 21:26:47.100107 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:26:47.100130 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:26:47.100326 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 21:26:47.100350 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 21:26:47.100542 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 21:26:47.100566 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:26:47.100773 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 21:26:47.100797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:26:47.100817 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:26:47.100836 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 21:26:47.100861 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 21:26:47.100880 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 21:26:47.101063 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 21:26:47.101088 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:26:47.101107 kernel: i8042: Warning: Keylock active
Jan 13 21:26:47.101126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:26:47.101145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:26:47.101328 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:26:47.101510 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:26:47.101720 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:26:46 UTC (1736803606)
Jan 13 21:26:47.101889 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:26:47.101912 kernel: intel_pstate: CPU model not supported
Jan 13 21:26:47.101931 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:26:47.101950 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:26:47.101969 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:26:47.101988 kernel: Segment Routing with IPv6
Jan 13 21:26:47.102012 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:26:47.102031 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:26:47.102049 kernel: Key type dns_resolver registered
Jan 13 21:26:47.102068 kernel: IPI shorthand broadcast: enabled
Jan 13 21:26:47.102087 kernel: sched_clock: Marking stable (842004563, 146284346)->(1012873386, -24584477)
Jan 13 21:26:47.102106 kernel: registered taskstats version 1
Jan 13 21:26:47.102125 kernel: Loading compiled-in X.509 certificates
Jan 13 21:26:47.102144 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:26:47.102163 kernel: Key type .fscrypt registered
Jan 13 21:26:47.102186 kernel: Key type fscrypt-provisioning registered
Jan 13 21:26:47.102205 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:26:47.102224 kernel: ima: No architecture policies found
Jan 13 21:26:47.102243 kernel: clk: Disabling unused clocks
Jan 13 21:26:47.102261 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:26:47.102280 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:26:47.102299 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:26:47.102318 kernel: Run /init as init process
Jan 13 21:26:47.102341 kernel: with arguments:
Jan 13 21:26:47.102359 kernel: /init
Jan 13 21:26:47.102378 kernel: with environment:
Jan 13 21:26:47.102396 kernel: HOME=/
Jan 13 21:26:47.102414 kernel: TERM=linux
Jan 13 21:26:47.102433 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:26:47.102458 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:26:47.102481 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:26:47.102508 systemd[1]: Detected virtualization google.
Jan 13 21:26:47.102528 systemd[1]: Detected architecture x86-64.
Jan 13 21:26:47.102548 systemd[1]: Running in initrd.
Jan 13 21:26:47.102567 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:26:47.102586 systemd[1]: Hostname set to .
Jan 13 21:26:47.102607 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:26:47.102627 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:26:47.102647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:47.102694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:47.102716 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:26:47.102736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:26:47.102756 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:26:47.102776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:26:47.102799 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:26:47.102819 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:26:47.102845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:47.102865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:26:47.102905 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:26:47.102930 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:26:47.102951 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:26:47.102972 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:26:47.102996 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:26:47.103017 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:26:47.103038 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:26:47.103059 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:26:47.103080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:47.103101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:47.103122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:47.103142 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:26:47.103163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:26:47.103188 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:26:47.103209 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:26:47.103229 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:26:47.103250 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:26:47.103270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:26:47.103291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:47.103312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:26:47.103363 systemd-journald[183]: Collecting audit messages is disabled.
Jan 13 21:26:47.103410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:47.103431 systemd-journald[183]: Journal started
Jan 13 21:26:47.103483 systemd-journald[183]: Runtime Journal (/run/log/journal/c3007c4d21ff41d6921488bba7716eac) is 8.0M, max 148.7M, 140.7M free.
Jan 13 21:26:47.105704 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:26:47.106006 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:26:47.109141 systemd-modules-load[184]: Inserted module 'overlay'
Jan 13 21:26:47.116223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:26:47.143841 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:26:47.160245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:47.164808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:26:47.167615 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 13 21:26:47.173838 kernel: Bridge firewalling registered
Jan 13 21:26:47.169308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:47.174306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:47.179234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:47.189880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:47.200872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:47.202225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:26:47.221388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:47.229841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:47.232900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:47.240966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:47.253914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:26:47.278650 systemd-resolved[210]: Positive Trust Anchors:
Jan 13 21:26:47.279148 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:26:47.279369 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:26:47.297917 dracut-cmdline[217]: dracut-dracut-053
Jan 13 21:26:47.297917 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:26:47.285859 systemd-resolved[210]: Defaulting to hostname 'linux'.
Jan 13 21:26:47.288930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:26:47.301900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:26:47.387707 kernel: SCSI subsystem initialized
Jan 13 21:26:47.397716 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:26:47.409698 kernel: iscsi: registered transport (tcp)
Jan 13 21:26:47.433228 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:26:47.433311 kernel: QLogic iSCSI HBA Driver
Jan 13 21:26:47.487320 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:26:47.504930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:26:47.533782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:26:47.533866 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:26:47.533892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:26:47.577725 kernel: raid6: avx2x4 gen() 18087 MB/s
Jan 13 21:26:47.594707 kernel: raid6: avx2x2 gen() 18269 MB/s
Jan 13 21:26:47.612114 kernel: raid6: avx2x1 gen() 14027 MB/s
Jan 13 21:26:47.612146 kernel: raid6: using algorithm avx2x2 gen() 18269 MB/s
Jan 13 21:26:47.630081 kernel: raid6: .... xor() 17770 MB/s, rmw enabled
Jan 13 21:26:47.630123 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:26:47.652708 kernel: xor: automatically using best checksumming function avx
Jan 13 21:26:47.824709 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:26:47.837737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:26:47.844926 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:47.876613 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jan 13 21:26:47.883424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:47.892141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:26:47.923936 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 13 21:26:47.960408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:26:47.975888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:26:48.055598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:48.070908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:26:48.105848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:26:48.117294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:26:48.121769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:48.126775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:26:48.135858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:26:48.170177 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:26:48.185975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:26:48.202694 kernel: scsi host0: Virtio SCSI HBA
Jan 13 21:26:48.208731 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 21:26:48.243842 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:26:48.239400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:26:48.258249 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:26:48.252615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:48.268983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:48.276751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:26:48.277007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:48.281473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:48.301037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:48.322223 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 21:26:48.339379 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 21:26:48.339646 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 21:26:48.339907 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 21:26:48.340138 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:26:48.340365 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:26:48.340392 kernel: GPT:17805311 != 25165823
Jan 13 21:26:48.340416 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:26:48.340440 kernel: GPT:17805311 != 25165823
Jan 13 21:26:48.340463 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:26:48.340486 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:48.340519 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 21:26:48.334114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:48.347031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:48.380946 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:48.405717 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456)
Jan 13 21:26:48.410863 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (463)
Jan 13 21:26:48.410762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 21:26:48.438115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 21:26:48.444403 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 21:26:48.444631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 21:26:48.461320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 21:26:48.464858 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:26:48.488739 disk-uuid[548]: Primary Header is updated.
Jan 13 21:26:48.488739 disk-uuid[548]: Secondary Entries is updated.
Jan 13 21:26:48.488739 disk-uuid[548]: Secondary Header is updated.
Jan 13 21:26:48.504709 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:48.532688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:48.549721 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:49.548903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:49.548987 disk-uuid[549]: The operation has completed successfully.
Jan 13 21:26:49.626971 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:26:49.627117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:26:49.656872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:26:49.688828 sh[566]: Success
Jan 13 21:26:49.712732 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:26:49.790124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:26:49.797562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:26:49.815088 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:26:49.866440 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:26:49.866526 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:49.866553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:26:49.882710 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:26:49.882762 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:26:49.914699 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:26:49.996499 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:26:49.997492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:26:50.003894 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:26:50.080861 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:50.080914 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:50.080942 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:50.080967 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:50.080991 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:50.074015 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:26:50.114959 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:50.109041 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:26:50.132948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:26:50.202351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:50.220928 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:26:50.311317 systemd-networkd[749]: lo: Link UP
Jan 13 21:26:50.311330 systemd-networkd[749]: lo: Gained carrier
Jan 13 21:26:50.314340 systemd-networkd[749]: Enumeration completed
Jan 13 21:26:50.314824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:26:50.336424 ignition[680]: Ignition 2.19.0
Jan 13 21:26:50.317420 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:50.336435 ignition[680]: Stage: fetch-offline
Jan 13 21:26:50.317426 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:26:50.336479 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.319866 systemd-networkd[749]: eth0: Link UP
Jan 13 21:26:50.336489 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.319873 systemd-networkd[749]: eth0: Gained carrier
Jan 13 21:26:50.336740 ignition[680]: parsed url from cmdline: ""
Jan 13 21:26:50.319888 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:50.336747 ignition[680]: no config URL provided
Jan 13 21:26:50.331752 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.96/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:26:50.336757 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.339143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:50.336772 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.339897 systemd[1]: Reached target network.target - Network.
Jan 13 21:26:50.336782 ignition[680]: failed to fetch config: resource requires networking
Jan 13 21:26:50.370953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:26:50.337084 ignition[680]: Ignition finished successfully
Jan 13 21:26:50.408819 unknown[758]: fetched base config from "system"
Jan 13 21:26:50.399492 ignition[758]: Ignition 2.19.0
Jan 13 21:26:50.408833 unknown[758]: fetched base config from "system"
Jan 13 21:26:50.399501 ignition[758]: Stage: fetch
Jan 13 21:26:50.408845 unknown[758]: fetched user config from "gcp"
Jan 13 21:26:50.399750 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.411683 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:26:50.399763 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.422880 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:26:50.399907 ignition[758]: parsed url from cmdline: ""
Jan 13 21:26:50.466217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:50.399914 ignition[758]: no config URL provided
Jan 13 21:26:50.485014 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:26:50.399921 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.534555 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:26:50.399933 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.543112 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:50.399955 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:26:50.568911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:26:50.403912 ignition[758]: GET result: OK
Jan 13 21:26:50.576949 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:26:50.403976 ignition[758]: parsing config with SHA512: 779b6bd97229a608169ad2c766cd78ee8e83e6b663994203f97a52b8a51fb3cc97fa9ba67d3ed915d00b36ae08d57ff025d6c1f7d3bdaca661acfa92adc3c51c
Jan 13 21:26:50.594945 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:26:50.409811 ignition[758]: fetch: fetch complete
Jan 13 21:26:50.611961 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:26:50.409822 ignition[758]: fetch: fetch passed
Jan 13 21:26:50.640878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:26:50.409901 ignition[758]: Ignition finished successfully
Jan 13 21:26:50.463745 ignition[764]: Ignition 2.19.0
Jan 13 21:26:50.463754 ignition[764]: Stage: kargs
Jan 13 21:26:50.463957 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.463969 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.465030 ignition[764]: kargs: kargs passed
Jan 13 21:26:50.465098 ignition[764]: Ignition finished successfully
Jan 13 21:26:50.531959 ignition[772]: Ignition 2.19.0
Jan 13 21:26:50.531968 ignition[772]: Stage: disks
Jan 13 21:26:50.532160 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.532181 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.533364 ignition[772]: disks: disks passed
Jan 13 21:26:50.533427 ignition[772]: Ignition finished successfully
Jan 13 21:26:50.692350 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:26:50.896400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:26:50.923816 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:26:51.049709 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:26:51.050128 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:26:51.051000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:26:51.070945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:51.109708 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 13 21:26:51.127082 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.127160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:51.127187 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:51.127354 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:26:51.150865 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:51.150956 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:51.159159 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:26:51.159248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:26:51.159294 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:51.184966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:51.201133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:26:51.223909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:26:51.340516 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:26:51.350807 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:26:51.360820 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:26:51.371856 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:26:51.488248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:51.492868 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:26:51.512851 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:51.548266 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.548556 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:26:51.588258 ignition[905]: INFO : Ignition 2.19.0
Jan 13 21:26:51.588810 ignition[905]: INFO : Stage: mount
Jan 13 21:26:51.588599 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:51.627852 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:51.627852 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:51.627852 ignition[905]: INFO : mount: mount passed
Jan 13 21:26:51.627852 ignition[905]: INFO : Ignition finished successfully
Jan 13 21:26:51.603118 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:26:51.624811 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:26:51.664935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:51.714684 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Jan 13 21:26:51.732573 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.732649 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:51.732691 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:51.754140 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:51.754207 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:51.757056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:51.802682 ignition[934]: INFO : Ignition 2.19.0
Jan 13 21:26:51.810798 ignition[934]: INFO : Stage: files
Jan 13 21:26:51.810798 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:51.810798 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:51.810798 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:26:51.810798 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:26:51.810798 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:26:51.815798 unknown[934]: wrote ssh authorized keys file for user: core
Jan 13 21:26:51.953797 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:26:52.109380 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:26:52.263936 systemd-networkd[749]: eth0: Gained IPv6LL Jan 13 21:26:52.426316 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:26:52.775823 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:26:52.775823 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:26:52.813891 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:26:52.813891 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:26:52.813891 ignition[934]: INFO : files: files passed Jan 13 21:26:52.813891 ignition[934]: INFO : Ignition finished successfully Jan 13 21:26:52.781463 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:26:52.799893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:26:52.819875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:26:52.866153 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:26:53.027996 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:53.027996 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:52.866275 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
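[Editorial note: the file, link, and unit operations in the Ignition "files" stage above map one-to-one onto an Ignition config. The sketch below is a reconstruction from the log, not the actual config used on this instance: the spec version, the update.conf payload, the SSH key placeholder, and the prepare-helm.service unit body are assumptions, and the home-directory files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml) are omitted for brevity; the paths, download URLs, user name, and enabled preset are taken directly from the entries above.]

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...elided"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" } },
          { "path": "/etc/flatcar/update.conf",
            "contents": { "source": "data:,REBOOT_STRATEGY%3Doff" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xf /opt/helm-v3.13.2-linux-amd64.tar.gz\n[Install]\nWantedBy=multi-user.target" }
        ]
      }
    }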
Jan 13 21:26:53.093828 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:26:52.908704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:26:52.915072 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:26:52.944892 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:26:53.025375 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:26:53.025502 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:26:53.039064 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:26:53.062943 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:26:53.083940 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:26:53.088884 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:26:53.151207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:26:53.170911 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:26:53.205589 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:26:53.220003 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:26:53.241046 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:26:53.261023 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:26:53.261233 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:26:53.294089 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:26:53.315075 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:26:53.333029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:26:53.352992 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:26:53.372032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:26:53.391986 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:26:53.412999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:26:53.434073 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:26:53.452016 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:26:53.472070 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:26:53.489953 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:26:53.490164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:26:53.520073 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:26:53.541036 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:26:53.561918 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:26:53.562122 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:26:53.582969 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
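[Editorial note: the grep complaints about enabled-sysext.conf come from a Flatcar initrd helper probing for an optional file; they are harmless here, as the log reaches ignition-complete.target immediately afterwards. The kubernetes systemd-sysext image staged earlier (the kubernetes.raw link under /etc/extensions) is merged by systemd-sysext on the real root. A quick post-boot check, as a hypothetical console session using standard systemd-sysext and coreutils invocations, with output not taken from this log:]

    # the staged link should point at the downloaded sysext image
    ls -l /etc/extensions/
    # list hierarchies (/usr, /opt) with their merged extensions
    systemd-sysext status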
Jan 13 21:26:53.689914 ignition[986]: INFO : Ignition 2.19.0 Jan 13 21:26:53.689914 ignition[986]: INFO : Stage: umount Jan 13 21:26:53.689914 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:26:53.689914 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:26:53.689914 ignition[986]: INFO : umount: umount passed Jan 13 21:26:53.689914 ignition[986]: INFO : Ignition finished successfully Jan 13 21:26:53.583170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:26:53.608066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:26:53.608295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:26:53.629052 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:26:53.629247 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:26:53.655935 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:26:53.701974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:26:53.704991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:26:53.705204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:26:53.719188 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:26:53.719362 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:26:53.815489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:26:53.816531 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:26:53.816643 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:26:53.831397 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:26:53.831507 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:26:53.852910 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:26:53.853062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:26:53.860909 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:26:53.860965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:26:53.877032 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:26:53.877093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:26:53.895021 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:26:53.895084 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:26:53.912046 systemd[1]: Stopped target network.target - Network. Jan 13 21:26:53.929011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:26:53.929098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:26:53.944056 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:26:53.976902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:26:53.981730 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:26:53.995896 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:26:54.003974 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:26:54.019006 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:26:54.019066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 21:26:54.054029 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:26:54.054099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:26:54.062072 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:26:54.062152 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:26:54.089027 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:26:54.089127 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:26:54.097013 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:26:54.097088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:26:54.114237 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:26:54.118732 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 13 21:26:54.141013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:26:54.160268 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:26:54.160401 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:26:54.181889 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:26:54.182057 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:26:54.201513 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:26:54.201568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:26:54.213824 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:26:54.234778 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:26:54.234893 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:26:54.245893 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:26:54.245970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:54.267880 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:26:54.267963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:26:54.710815 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 13 21:26:54.285826 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:26:54.285918 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:26:54.304996 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:26:54.324273 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:26:54.324430 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:26:54.348114 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:26:54.348230 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:26:54.370858 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:26:54.370933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:26:54.388823 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:26:54.388912 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:26:54.416764 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:26:54.416870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 13 21:26:54.446813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:26:54.447022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:26:54.482892 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:26:54.500769 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:26:54.500877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:26:54.511878 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:26:54.511954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:26:54.522852 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:26:54.522933 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:26:54.541828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:26:54.541913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:54.563317 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:26:54.563445 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:26:54.581251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:26:54.581361 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:26:54.603002 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:26:54.623885 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:26:54.660764 systemd[1]: Switching root. Jan 13 21:26:55.018791 systemd-journald[183]: Journal stopped Jan 13 21:26:47.089993 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:26:47.090038 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:26:47.090056 kernel: BIOS-provided physical RAM map: Jan 13 21:26:47.090069 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 13 21:26:47.090083 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 13 21:26:47.090096 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 13 21:26:47.090112 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 13 21:26:47.090132 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 13 21:26:47.090147 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 13 21:26:47.090162 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 13 21:26:47.090178 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 13 21:26:47.090193 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 13 21:26:47.090207 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 13 21:26:47.090223 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 13 
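[Editorial note: "Journal stopped" marks the initrd journald instance shutting down at the pivot to the real root, where a new journald takes over and replays the accumulated entries; the verbatim replay of the boot log that followed here has been removed as a duplicate. To study a boot like this after the fact, the standard journalctl queries below are useful, shown as a hypothetical session that is not part of this log:]

    # enumerate the boots recorded in the persistent journal
    journalctl --list-boots
    # full current-boot log with microsecond timestamps
    journalctl -b -o short-precise
    # isolate a single initrd unit, e.g. the Ignition files stage
    journalctl -b -u ignition-files.service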
21:26:47.090245 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 13 21:26:47.090262 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 13 21:26:47.090278 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 13 21:26:47.090295 kernel: NX (Execute Disable) protection: active Jan 13 21:26:47.090311 kernel: APIC: Static calls initialized Jan 13 21:26:47.090328 kernel: efi: EFI v2.7 by EDK II Jan 13 21:26:47.090345 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 13 21:26:47.090362 kernel: SMBIOS 2.4 present. Jan 13 21:26:47.090379 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 13 21:26:47.090395 kernel: Hypervisor detected: KVM Jan 13 21:26:47.090415 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:26:47.090431 kernel: kvm-clock: using sched offset of 11955698206 cycles Jan 13 21:26:47.090448 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:26:47.090473 kernel: tsc: Detected 2299.998 MHz processor Jan 13 21:26:47.090490 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:26:47.090507 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:26:47.090524 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 13 21:26:47.090542 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 13 21:26:47.090559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:26:47.090580 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 13 21:26:47.090597 kernel: Using GB pages for direct mapping Jan 13 21:26:47.090613 kernel: Secure boot disabled Jan 13 21:26:47.090631 kernel: ACPI: Early table checksum verification disabled Jan 13 21:26:47.090647 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 13 21:26:47.090676 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 13 21:26:47.090706 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 13 21:26:47.090731 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 13 21:26:47.090752 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 13 21:26:47.090771 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 13 21:26:47.090789 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 13 21:26:47.090807 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 13 21:26:47.090826 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 13 21:26:47.090844 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 13 21:26:47.090865 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 13 21:26:47.090884 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 13 21:26:47.090902 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 13 21:26:47.090920 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 13 21:26:47.090938 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 13 21:26:47.090956 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 13 21:26:47.090975 kernel: ACPI: Reserving 
SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 13 21:26:47.090992 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 13 21:26:47.091010 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 13 21:26:47.091032 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 13 21:26:47.091050 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 13 21:26:47.091068 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 13 21:26:47.091086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 13 21:26:47.091104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 13 21:26:47.091122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 13 21:26:47.091141 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 13 21:26:47.091159 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 13 21:26:47.091177 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 13 21:26:47.091199 kernel: Zone ranges: Jan 13 21:26:47.091217 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:26:47.091235 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:26:47.091253 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:26:47.091271 kernel: Movable zone start for each node Jan 13 21:26:47.091289 kernel: Early memory node ranges Jan 13 21:26:47.091307 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 13 21:26:47.091325 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 13 21:26:47.091342 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 13 21:26:47.091364 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 13 21:26:47.091382 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:26:47.091400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 13 21:26:47.091417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:26:47.091434 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 13 21:26:47.091468 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 13 21:26:47.091486 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 21:26:47.091503 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 13 21:26:47.091520 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 21:26:47.091537 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:26:47.091557 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:26:47.091576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:26:47.091594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:26:47.091612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:26:47.091630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:26:47.091649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:26:47.091690 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:26:47.091707 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 21:26:47.091726 kernel: Booting paravirtualized kernel on KVM Jan 13 21:26:47.091742 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:26:47.091757 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 
nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:26:47.091772 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:26:47.091787 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:26:47.091801 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:26:47.091815 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:26:47.091831 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:26:47.091849 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:26:47.091871 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:26:47.091888 kernel: random: crng init done Jan 13 21:26:47.091904 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 13 21:26:47.091921 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:26:47.091936 kernel: Fallback order for Node 0: 0 Jan 13 21:26:47.091953 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 13 21:26:47.091970 kernel: Policy zone: Normal Jan 13 21:26:47.091987 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:26:47.092003 kernel: software IO TLB: area num 2. Jan 13 21:26:47.092044 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Jan 13 21:26:47.092062 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:26:47.092079 kernel: Kernel/User page tables isolation: enabled Jan 13 21:26:47.092096 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:26:47.092114 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:26:47.092131 kernel: Dynamic Preempt: voluntary Jan 13 21:26:47.092149 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:26:47.092167 kernel: rcu: RCU event tracing is enabled. Jan 13 21:26:47.092203 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:26:47.092221 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:26:47.092239 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:26:47.092261 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:26:47.092278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:26:47.092297 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:26:47.092315 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:26:47.092333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 13 21:26:47.092352 kernel: Console: colour dummy device 80x25 Jan 13 21:26:47.092374 kernel: printk: console [ttyS0] enabled Jan 13 21:26:47.092393 kernel: ACPI: Core revision 20230628 Jan 13 21:26:47.092411 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:26:47.092429 kernel: x2apic enabled Jan 13 21:26:47.092447 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:26:47.092474 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 13 21:26:47.092493 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:26:47.092512 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 13 21:26:47.092534 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 13 21:26:47.092552 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 13 21:26:47.092571 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:26:47.092589 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 21:26:47.092607 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 21:26:47.092625 kernel: Spectre V2 : Mitigation: IBRS Jan 13 21:26:47.092643 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:26:47.092675 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:26:47.092694 kernel: RETBleed: Mitigation: IBRS Jan 13 21:26:47.092717 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:26:47.092735 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 13 21:26:47.092754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:26:47.092772 kernel: MDS: Mitigation: Clear CPU buffers Jan 13 21:26:47.092790 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 21:26:47.092808 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:26:47.092826 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:26:47.092844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:26:47.092862 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:26:47.092884 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 13 21:26:47.092903 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:26:47.092920 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:26:47.092939 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:26:47.092957 kernel: landlock: Up and running. Jan 13 21:26:47.092976 kernel: SELinux: Initializing. Jan 13 21:26:47.092994 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:26:47.093012 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:26:47.093031 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 13 21:26:47.093053 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:26:47.093072 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:26:47.093090 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 13 21:26:47.093109 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 13 21:26:47.093127 kernel: signal: max sigframe size: 1776 Jan 13 21:26:47.093145 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:26:47.093164 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:26:47.093181 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:26:47.093199 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:26:47.093221 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:26:47.093239 kernel: .... node #0, CPUs: #1 Jan 13 21:26:47.093258 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 21:26:47.093286 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 21:26:47.093304 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:26:47.093319 kernel: smpboot: Max logical packages: 1 Jan 13 21:26:47.093336 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 13 21:26:47.093355 kernel: devtmpfs: initialized Jan 13 21:26:47.093379 kernel: x86/mm: Memory block size: 128MB Jan 13 21:26:47.093398 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 13 21:26:47.093417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:26:47.093436 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:26:47.093463 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:26:47.093482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:26:47.093501 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:26:47.093520 kernel: audit: type=2000 audit(1736803605.740:1): state=initialized audit_enabled=0 res=1 Jan 13 21:26:47.093539 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:26:47.093562 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:26:47.093580 kernel: cpuidle: using governor menu Jan 13 21:26:47.093599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:26:47.093617 kernel: dca service started, version 1.12.1 Jan 13 21:26:47.093635 kernel: PCI: Using configuration type 1 for base access Jan 13 21:26:47.093653 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 21:26:47.093704 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:26:47.093724 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:26:47.093742 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:26:47.093766 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:26:47.093784 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:26:47.093803 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:26:47.093821 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:26:47.093839 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:26:47.093857 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 21:26:47.093875 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:26:47.093894 kernel: ACPI: Interpreter enabled Jan 13 21:26:47.093912 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:26:47.093935 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:26:47.093954 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:26:47.093972 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 13 21:26:47.093991 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 21:26:47.094010 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:26:47.094261 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:26:47.094467 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:26:47.094692 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:26:47.094718 kernel: PCI host bridge to bus 0000:00 Jan 13 21:26:47.094902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:26:47.095071 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:26:47.095238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:26:47.095400 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 13 21:26:47.095573 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:26:47.095804 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:26:47.096019 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 13 21:26:47.096212 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 21:26:47.096405 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 21:26:47.096607 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 13 21:26:47.096834 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 13 21:26:47.097032 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 13 21:26:47.097227 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:26:47.097416 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 13 21:26:47.097615 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 13 21:26:47.097843 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:26:47.098029 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 13 21:26:47.098211 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 13 21:26:47.098242 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:26:47.098262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:26:47.098280 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:26:47.098297 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:26:47.098317 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:26:47.098337 kernel: iommu: Default domain type: Translated Jan 13 21:26:47.098356 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:26:47.098375 kernel: efivars: Registered efivars operations Jan 13 21:26:47.098395 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:26:47.098420 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:26:47.098439 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 13 21:26:47.098466 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 13 21:26:47.098485 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 13 21:26:47.098504 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 13 21:26:47.098524 kernel: vgaarb: loaded Jan 13 21:26:47.098544 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:26:47.098563 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:26:47.098583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:26:47.098606 kernel: pnp: PnP ACPI init Jan 13 21:26:47.098626 kernel: pnp: PnP ACPI: found 7 devices Jan 13 21:26:47.098646 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:26:47.098690 kernel: NET: Registered PF_INET protocol family Jan 13 21:26:47.098709 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 21:26:47.098729 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 13 21:26:47.098749 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:26:47.098769 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:26:47.098789 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 21:26:47.098813 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 13 21:26:47.098832 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:26:47.098852 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:26:47.098872 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:26:47.098892 kernel: NET: Registered PF_XDP protocol family Jan 13 21:26:47.099074 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:26:47.099240 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:26:47.099401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:26:47.099581 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 13 21:26:47.099798 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:26:47.099825 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:26:47.099840 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:26:47.099863 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 13 21:26:47.099887 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:26:47.099910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:26:47.099930 kernel: clocksource: Switched to clocksource tsc Jan 13 21:26:47.099955 kernel: Initialise system trusted keyrings Jan 13 21:26:47.099974 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 13 21:26:47.099993 kernel: Key type asymmetric registered Jan 13 21:26:47.100012 kernel: Asymmetric key parser 'x509' registered Jan 13 21:26:47.100030 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:26:47.100050 kernel: io scheduler mq-deadline registered Jan 13 21:26:47.100069 kernel: io scheduler kyber registered Jan 13 21:26:47.100087 kernel: io scheduler bfq registered Jan 13 21:26:47.100107 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:26:47.100130 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 21:26:47.100326 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 13 21:26:47.100350 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 13 21:26:47.100542 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 13 21:26:47.100566 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 21:26:47.100773 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 13 21:26:47.100797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:26:47.100817 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:26:47.100836 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 21:26:47.100861 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 13 21:26:47.100880 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 13 21:26:47.101063 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 13 21:26:47.101088 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:26:47.101107 kernel: i8042: Warning: Keylock active Jan 13 21:26:47.101126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:26:47.101145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:26:47.101328 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 21:26:47.101510 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 21:26:47.101720 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:26:46 UTC (1736803606) Jan 13 21:26:47.101889 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 21:26:47.101912 kernel: intel_pstate: CPU model not supported Jan 13 21:26:47.101931 kernel: pstore: Using crash dump compression: deflate Jan 13 21:26:47.101950 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 21:26:47.101969 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:26:47.101988 kernel: Segment Routing with IPv6 Jan 13 21:26:47.102012 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:26:47.102031 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:26:47.102049 kernel: Key type dns_resolver registered Jan 13 21:26:47.102068 kernel: IPI shorthand broadcast: enabled Jan 13 21:26:47.102087 kernel: sched_clock: Marking stable (842004563, 146284346)->(1012873386, -24584477) Jan 13 21:26:47.102106 kernel: registered taskstats version 1 Jan 13 21:26:47.102125 kernel: Loading compiled-in X.509 certificates Jan 13 21:26:47.102144 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:26:47.102163 kernel: Key type .fscrypt registered Jan 13 21:26:47.102186 kernel: Key type fscrypt-provisioning registered Jan 13 21:26:47.102205 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:26:47.102224 kernel: ima: No architecture policies found Jan 13 
21:26:47.102243 kernel: clk: Disabling unused clocks Jan 13 21:26:47.102261 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:26:47.102280 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:26:47.102299 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:26:47.102318 kernel: Run /init as init process Jan 13 21:26:47.102341 kernel: with arguments: Jan 13 21:26:47.102359 kernel: /init Jan 13 21:26:47.102378 kernel: with environment: Jan 13 21:26:47.102396 kernel: HOME=/ Jan 13 21:26:47.102414 kernel: TERM=linux Jan 13 21:26:47.102433 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:26:47.102458 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:26:47.102481 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:26:47.102508 systemd[1]: Detected virtualization google. Jan 13 21:26:47.102528 systemd[1]: Detected architecture x86-64. Jan 13 21:26:47.102548 systemd[1]: Running in initrd. Jan 13 21:26:47.102567 systemd[1]: No hostname configured, using default hostname. Jan 13 21:26:47.102586 systemd[1]: Hostname set to . Jan 13 21:26:47.102607 systemd[1]: Initializing machine ID from random generator. Jan 13 21:26:47.102627 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:26:47.102647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:26:47.102694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:26:47.102716 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:26:47.102736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:26:47.102756 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:26:47.102776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:26:47.102799 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:26:47.102819 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:26:47.102845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:26:47.102865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:26:47.102905 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:26:47.102930 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:26:47.102951 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:26:47.102972 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:26:47.102996 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:26:47.103017 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:26:47.103038 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:26:47.103059 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 13 21:26:47.103080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:26:47.103101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:26:47.103122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:26:47.103142 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:26:47.103163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:26:47.103188 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:26:47.103209 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:26:47.103229 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:26:47.103250 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:26:47.103270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:26:47.103291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:26:47.103312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:26:47.103363 systemd-journald[183]: Collecting audit messages is disabled. Jan 13 21:26:47.103410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:26:47.103431 systemd-journald[183]: Journal started Jan 13 21:26:47.103483 systemd-journald[183]: Runtime Journal (/run/log/journal/c3007c4d21ff41d6921488bba7716eac) is 8.0M, max 148.7M, 140.7M free. Jan 13 21:26:47.105704 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:26:47.106006 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:26:47.109141 systemd-modules-load[184]: Inserted module 'overlay' Jan 13 21:26:47.116223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:26:47.143841 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:26:47.160245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:26:47.164808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:26:47.167615 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 13 21:26:47.173838 kernel: Bridge firewalling registered Jan 13 21:26:47.169308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:26:47.174306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:47.179234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:26:47.189880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:26:47.200872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:26:47.202225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:26:47.221388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:26:47.229841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:26:47.232900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:26:47.240966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:26:47.253914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:26:47.278650 systemd-resolved[210]: Positive Trust Anchors: Jan 13 21:26:47.279148 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:26:47.279369 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:26:47.297917 dracut-cmdline[217]: dracut-dracut-053 Jan 13 21:26:47.297917 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:26:47.285859 systemd-resolved[210]: Defaulting to hostname 'linux'. Jan 13 21:26:47.288930 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:26:47.301900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:26:47.387707 kernel: SCSI subsystem initialized Jan 13 21:26:47.397716 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:26:47.409698 kernel: iscsi: registered transport (tcp) Jan 13 21:26:47.433228 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:26:47.433311 kernel: QLogic iSCSI HBA Driver Jan 13 21:26:47.487320 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:26:47.504930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:26:47.533782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:26:47.533866 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:26:47.533892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:26:47.577725 kernel: raid6: avx2x4 gen() 18087 MB/s Jan 13 21:26:47.594707 kernel: raid6: avx2x2 gen() 18269 MB/s Jan 13 21:26:47.612114 kernel: raid6: avx2x1 gen() 14027 MB/s Jan 13 21:26:47.612146 kernel: raid6: using algorithm avx2x2 gen() 18269 MB/s Jan 13 21:26:47.630081 kernel: raid6: .... xor() 17770 MB/s, rmw enabled Jan 13 21:26:47.630123 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:26:47.652708 kernel: xor: automatically using best checksumming function avx Jan 13 21:26:47.824709 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:26:47.837737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:26:47.844926 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:26:47.876613 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 13 21:26:47.883424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 21:26:47.892141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:26:47.923936 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 13 21:26:47.960408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:26:47.975888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:26:48.055598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:26:48.070908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:26:48.105848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:26:48.117294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:26:48.121769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:26:48.126775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:26:48.135858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:26:48.170177 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:26:48.185975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:26:48.202694 kernel: scsi host0: Virtio SCSI HBA Jan 13 21:26:48.208731 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 13 21:26:48.243842 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:26:48.239400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:26:48.258249 kernel: AES CTR mode by8 optimization enabled Jan 13 21:26:48.252615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:26:48.268983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:26:48.276751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:26:48.277007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:48.281473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:26:48.301037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:26:48.322223 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 13 21:26:48.339379 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 13 21:26:48.339646 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 13 21:26:48.339907 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 13 21:26:48.340138 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 21:26:48.340365 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:26:48.340392 kernel: GPT:17805311 != 25165823 Jan 13 21:26:48.340416 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:26:48.340440 kernel: GPT:17805311 != 25165823 Jan 13 21:26:48.340463 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:26:48.340486 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:26:48.340519 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 13 21:26:48.334114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:48.347031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:26:48.380946 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:26:48.405717 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456)
Jan 13 21:26:48.410863 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (463)
Jan 13 21:26:48.410762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 21:26:48.438115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 21:26:48.444403 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 21:26:48.444631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 21:26:48.461320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 21:26:48.464858 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:26:48.488739 disk-uuid[548]: Primary Header is updated.
Jan 13 21:26:48.488739 disk-uuid[548]: Secondary Entries is updated.
Jan 13 21:26:48.488739 disk-uuid[548]: Secondary Header is updated.
Jan 13 21:26:48.504709 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:48.532688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:48.549721 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:49.548903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:26:49.548987 disk-uuid[549]: The operation has completed successfully.
Jan 13 21:26:49.626971 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:26:49.627117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:26:49.656872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:26:49.688828 sh[566]: Success
Jan 13 21:26:49.712732 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:26:49.790124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:26:49.797562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:26:49.815088 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:26:49.866440 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:26:49.866526 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:49.866553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:26:49.882710 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:26:49.882762 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:26:49.914699 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:26:49.996499 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:26:49.997492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:26:50.003894 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
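
verity-setup.service builds /dev/mapper/usr from the root hash passed as verity.usrhash= on the kernel command line (visible in the dracut-cmdline entry near the top of this section). A minimal sketch of extracting that parameter on the booted host, assuming plain Python over /proc/cmdline:

    # Pull verity.usrhash out of /proc/cmdline; for duplicate keys the last value wins.
    cmdline = open("/proc/cmdline").read()
    params = dict(kv.split("=", 1) for kv in cmdline.split() if "=" in kv)
    print("dm-verity root hash for /usr:", params.get("verity.usrhash"))
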
Jan 13 21:26:50.080861 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:50.080914 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:50.080942 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:50.080967 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:50.080991 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:50.074015 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:26:50.114959 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:50.109041 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:26:50.132948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:26:50.202351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:50.220928 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:26:50.311317 systemd-networkd[749]: lo: Link UP
Jan 13 21:26:50.311330 systemd-networkd[749]: lo: Gained carrier
Jan 13 21:26:50.314340 systemd-networkd[749]: Enumeration completed
Jan 13 21:26:50.314824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:26:50.336424 ignition[680]: Ignition 2.19.0
Jan 13 21:26:50.317420 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:50.336435 ignition[680]: Stage: fetch-offline
Jan 13 21:26:50.317426 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:26:50.336479 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.319866 systemd-networkd[749]: eth0: Link UP
Jan 13 21:26:50.336489 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.319873 systemd-networkd[749]: eth0: Gained carrier
Jan 13 21:26:50.336740 ignition[680]: parsed url from cmdline: ""
Jan 13 21:26:50.319888 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:50.336747 ignition[680]: no config URL provided
Jan 13 21:26:50.331752 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.96/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:26:50.336757 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.339143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:50.336772 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.339897 systemd[1]: Reached target network.target - Network.
Jan 13 21:26:50.336782 ignition[680]: failed to fetch config: resource requires networking
Jan 13 21:26:50.370953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:26:50.337084 ignition[680]: Ignition finished successfully
Jan 13 21:26:50.408819 unknown[758]: fetched base config from "system"
Jan 13 21:26:50.399492 ignition[758]: Ignition 2.19.0
Jan 13 21:26:50.408833 unknown[758]: fetched base config from "system"
Jan 13 21:26:50.399501 ignition[758]: Stage: fetch
Jan 13 21:26:50.408845 unknown[758]: fetched user config from "gcp"
Jan 13 21:26:50.399750 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.411683 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:26:50.399763 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.422880 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:26:50.399907 ignition[758]: parsed url from cmdline: ""
Jan 13 21:26:50.466217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:50.399914 ignition[758]: no config URL provided
Jan 13 21:26:50.485014 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:26:50.399921 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.534555 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:26:50.399933 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:50.543112 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:50.399955 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:26:50.568911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:26:50.403912 ignition[758]: GET result: OK
Jan 13 21:26:50.576949 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:26:50.403976 ignition[758]: parsing config with SHA512: 779b6bd97229a608169ad2c766cd78ee8e83e6b663994203f97a52b8a51fb3cc97fa9ba67d3ed915d00b36ae08d57ff025d6c1f7d3bdaca661acfa92adc3c51c
Jan 13 21:26:50.594945 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:26:50.409811 ignition[758]: fetch: fetch complete
Jan 13 21:26:50.611961 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:26:50.409822 ignition[758]: fetch: fetch passed
Jan 13 21:26:50.640878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:26:50.409901 ignition[758]: Ignition finished successfully
Jan 13 21:26:50.463745 ignition[764]: Ignition 2.19.0
Jan 13 21:26:50.463754 ignition[764]: Stage: kargs
Jan 13 21:26:50.463957 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.463969 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.465030 ignition[764]: kargs: kargs passed
Jan 13 21:26:50.465098 ignition[764]: Ignition finished successfully
Jan 13 21:26:50.531959 ignition[772]: Ignition 2.19.0
Jan 13 21:26:50.531968 ignition[772]: Stage: disks
Jan 13 21:26:50.532160 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.532181 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:50.533364 ignition[772]: disks: disks passed
Jan 13 21:26:50.533427 ignition[772]: Ignition finished successfully
Jan 13 21:26:50.692350 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:26:50.896400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:26:50.923816 systemd[1]: Mounting sysroot.mount - /sysroot...
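
The fetch-stage entries above show Ignition retrieving the instance's user-data from the GCE metadata server and logging a SHA512 fingerprint of the raw config before parsing it. A rough Python equivalent, not Ignition's actual implementation: the URL is the one in the log, the Metadata-Flavor header is what the GCE metadata server requires, and the request only succeeds from inside an instance:

    import hashlib
    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data",
        headers={"Metadata-Flavor": "Google"},  # required by the GCE metadata server
    )
    body = urllib.request.urlopen(req, timeout=5).read()
    # Ignition logs the SHA512 of the raw config before parsing it.
    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
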
Jan 13 21:26:51.049709 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:26:51.050128 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:26:51.051000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:26:51.070945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:51.109708 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 13 21:26:51.127082 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.127160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:51.127187 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:51.127354 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:26:51.150865 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:51.150956 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:51.159159 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:26:51.159248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:26:51.159294 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:51.184966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:51.201133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:26:51.223909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:26:51.340516 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:26:51.350807 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:26:51.360820 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:26:51.371856 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:26:51.488248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:51.492868 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:26:51.512851 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:51.548266 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.548556 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:26:51.588258 ignition[905]: INFO : Ignition 2.19.0
Jan 13 21:26:51.588810 ignition[905]: INFO : Stage: mount
Jan 13 21:26:51.588599 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:51.627852 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:51.627852 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:51.627852 ignition[905]: INFO : mount: mount passed
Jan 13 21:26:51.627852 ignition[905]: INFO : Ignition finished successfully
Jan 13 21:26:51.603118 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:26:51.624811 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:26:51.664935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:51.714684 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Jan 13 21:26:51.732573 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:26:51.732649 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:51.732691 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:26:51.754140 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:26:51.754207 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:26:51.757056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:51.802682 ignition[934]: INFO : Ignition 2.19.0
Jan 13 21:26:51.810798 ignition[934]: INFO : Stage: files
Jan 13 21:26:51.810798 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:51.810798 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:51.810798 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:26:51.810798 ignition[934]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 21:26:51.810798 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:26:51.875776 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:26:51.815798 unknown[934]: wrote ssh authorized keys file for user: core
Jan 13 21:26:51.953797 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:26:52.109380 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
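
The files stage above walks the config's operations in order: user setup (op(1), op(2)), then file writes, fetching remote contents over HTTPS where the config gives a source URL, as op(3) does for the Helm tarball. A rough Python equivalent of op(3), not Ignition's actual implementation; the URL and destination path are the ones in the log:

    import pathlib
    import urllib.request

    url = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
    dest = pathlib.Path("/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz")
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url, timeout=30) as resp:
        dest.write_bytes(resp.read())  # write the fetched tarball under the target root
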
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:26:52.125831 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 21:26:52.263936 systemd-networkd[749]: eth0: Gained IPv6LL
Jan 13 21:26:52.426316 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:26:52.775823 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:26:52.775823 ignition[934]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:26:52.813891 ignition[934]: INFO : files: files passed
Jan 13 21:26:52.813891 ignition[934]: INFO : Ignition finished successfully
Jan 13 21:26:52.781463 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:26:52.799893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:26:52.819875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:26:52.866153 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:26:53.027996 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:53.027996 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:52.866275 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
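
op(9) above activates the kubernetes sysext by symlinking /etc/extensions/kubernetes.raw to the image that op(a) downloads into /opt/extensions; systemd-sysext later merges whatever /etc/extensions points at (visible at 21:26:58 below). A sketch of the link creation, assuming Python against the /sysroot prefix used in the log:

    import pathlib

    link = pathlib.Path("/sysroot/etc/extensions/kubernetes.raw")
    target = "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
    link.parent.mkdir(parents=True, exist_ok=True)
    link.symlink_to(target)  # dangling until the /opt path exists in the real root
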
Jan 13 21:26:53.093828 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:52.908704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:26:52.915072 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:26:52.944892 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:26:53.025375 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:26:53.025502 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:26:53.039064 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:26:53.062943 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:26:53.083940 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:26:53.088884 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:26:53.151207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:26:53.170911 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:26:53.205589 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:26:53.220003 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:53.241046 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:26:53.261023 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:26:53.261233 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:26:53.294089 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:26:53.315075 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:26:53.333029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:26:53.352992 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:53.372032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:53.391986 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:26:53.412999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:26:53.434073 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:26:53.452016 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:26:53.472070 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:26:53.489953 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:26:53.490164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:26:53.520073 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:26:53.541036 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:53.561918 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:26:53.562122 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:53.582969 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:26:53.689914 ignition[986]: INFO : Ignition 2.19.0
Jan 13 21:26:53.689914 ignition[986]: INFO : Stage: umount
Jan 13 21:26:53.689914 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:53.689914 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:26:53.689914 ignition[986]: INFO : umount: umount passed
Jan 13 21:26:53.689914 ignition[986]: INFO : Ignition finished successfully
Jan 13 21:26:53.583170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:26:53.608066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:26:53.608295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:26:53.629052 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:26:53.629247 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:26:53.655935 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:26:53.701974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:53.704991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:26:53.705204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:53.719188 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:26:53.719362 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:26:53.815489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:26:53.816531 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:26:53.816643 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:26:53.831397 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:26:53.831507 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:53.852910 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:26:53.853062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:26:53.860909 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:26:53.860965 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:26:53.877032 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:26:53.877093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:53.895021 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:26:53.895084 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:26:53.912046 systemd[1]: Stopped target network.target - Network.
Jan 13 21:26:53.929011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:26:53.929098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:53.944056 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:26:53.976902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:26:53.981730 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:53.995896 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:26:54.003974 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:26:54.019006 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:26:54.019066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:26:54.054029 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:26:54.054099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:26:54.062072 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:26:54.062152 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:26:54.089027 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:26:54.089127 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:26:54.097013 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:26:54.097088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:54.114237 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:26:54.118732 systemd-networkd[749]: eth0: DHCPv6 lease lost
Jan 13 21:26:54.141013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:54.160268 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:26:54.160401 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:26:54.181889 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:26:54.182057 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:26:54.201513 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:26:54.201568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:54.213824 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:26:54.234778 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:26:54.234893 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:54.245893 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:26:54.245970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:54.267880 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:26:54.267963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:54.710815 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:26:54.285826 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:26:54.285918 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:54.304996 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:54.324273 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:26:54.324430 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:54.348114 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:26:54.348230 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:54.370858 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:26:54.370933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:54.388823 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:26:54.388912 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:26:54.416764 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:26:54.416870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:26:54.446813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:26:54.447022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:54.482892 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:26:54.500769 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:26:54.500877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:54.511878 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:26:54.511954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:54.522852 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:26:54.522933 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:54.541828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:26:54.541913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:54.563317 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:26:54.563445 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:26:54.581251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:26:54.581361 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:26:54.603002 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:26:54.623885 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:26:54.660764 systemd[1]: Switching root.
Jan 13 21:26:55.018791 systemd-journald[183]: Journal stopped
Jan 13 21:26:57.333673 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 21:26:57.333722 kernel: SELinux:  policy capability open_perms=1
Jan 13 21:26:57.333747 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 21:26:57.333765 kernel: SELinux:  policy capability always_check_network=0
Jan 13 21:26:57.333782 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 21:26:57.333807 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 21:26:57.333828 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 21:26:57.333850 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 21:26:57.333868 kernel: audit: type=1403 audit(1736803615.296:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:26:57.333890 systemd[1]: Successfully loaded SELinux policy in 80.141ms.
Jan 13 21:26:57.333913 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.825ms.
Jan 13 21:26:57.333935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:26:57.333955 systemd[1]: Detected virtualization google.
Jan 13 21:26:57.333974 systemd[1]: Detected architecture x86-64.
Jan 13 21:26:57.333999 systemd[1]: Detected first boot.
Jan 13 21:26:57.334022 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:26:57.334043 zram_generator::config[1028]: No configuration found.
Jan 13 21:26:57.334065 systemd[1]: Populated /etc with preset unit settings.
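
The systemd banner above encodes compile-time features as +NAME/-NAME tokens. Purely as a reading aid, a short Python parse of that exact string into enabled and disabled sets:

    # Feature string copied verbatim from the systemd 255 banner above.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")
    enabled = sorted(t[1:] for t in banner.split() if t.startswith("+"))
    disabled = sorted(t[1:] for t in banner.split() if t.startswith("-"))
    print("enabled:", enabled)
    print("disabled:", disabled)
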
Jan 13 21:26:57.334086 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:26:57.334111 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:26:57.334131 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:26:57.334154 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:26:57.334174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:26:57.334196 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:26:57.334219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:26:57.334240 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:26:57.334265 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:26:57.334287 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:26:57.334308 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:26:57.334330 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:57.334352 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:57.334373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:26:57.334394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:26:57.334415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:26:57.334441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:26:57.334462 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:26:57.334483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:57.334505 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:26:57.334526 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:26:57.334547 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:26:57.334575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:26:57.334597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:57.334620 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:26:57.334645 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:26:57.334680 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:26:57.334703 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:26:57.334725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:26:57.334748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:57.334769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:57.334796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:57.334824 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:26:57.334847 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:26:57.334869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:26:57.334892 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:26:57.334914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:57.334941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:26:57.334963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:26:57.334985 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:26:57.335008 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:26:57.335031 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:26:57.335053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:26:57.335076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:26:57.335098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:26:57.335124 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:26:57.335147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:26:57.335171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:26:57.335194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:26:57.335217 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:26:57.335238 kernel: fuse: init (API version 7.39)
Jan 13 21:26:57.335259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:26:57.335282 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:26:57.335308 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:26:57.335330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:26:57.335353 kernel: ACPI: bus type drm_connector registered
Jan 13 21:26:57.335373 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:26:57.335396 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:26:57.335417 kernel: loop: module loaded
Jan 13 21:26:57.335438 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:26:57.335461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:26:57.335483 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:26:57.335536 systemd-journald[1115]: Collecting audit messages is disabled.
Jan 13 21:26:57.335582 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:26:57.335604 systemd-journald[1115]: Journal started
Jan 13 21:26:57.335652 systemd-journald[1115]: Runtime Journal (/run/log/journal/18cb9604e1e44a14baad6b82c3b4c757) is 8.0M, max 148.7M, 140.7M free.
Jan 13 21:26:56.136356 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:26:56.160213 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 21:26:56.160810 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:26:57.380290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:26:57.380384 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:26:57.380705 systemd[1]: Stopped verity-setup.service.
Jan 13 21:26:57.417726 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:57.427719 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:26:57.439193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:26:57.449075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:26:57.460082 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:26:57.470055 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:26:57.480030 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:26:57.490046 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:26:57.500263 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:26:57.512212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:57.525157 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:26:57.525387 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:26:57.537118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:26:57.537350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:26:57.549120 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:26:57.549351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:26:57.559129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:26:57.559356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:26:57.571103 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:26:57.571329 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:26:57.581096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:26:57.581311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:26:57.591104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:57.601089 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:26:57.612085 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:26:57.623077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:57.647509 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:26:57.663809 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:26:57.684140 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:26:57.693827 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:26:57.694054 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:26:57.705006 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:26:57.720903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:26:57.738773 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:26:57.748995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:26:57.756150 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:26:57.772435 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:26:57.784884 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:26:57.795302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:26:57.805862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:26:57.819285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:57.828745 systemd-journald[1115]: Time spent on flushing to /var/log/journal/18cb9604e1e44a14baad6b82c3b4c757 is 86.590ms for 929 entries.
Jan 13 21:26:57.828745 systemd-journald[1115]: System Journal (/var/log/journal/18cb9604e1e44a14baad6b82c3b4c757) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:26:57.948380 systemd-journald[1115]: Received client request to flush runtime journal.
Jan 13 21:26:57.948509 kernel: loop0: detected capacity change from 0 to 205544
Jan 13 21:26:57.848986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:26:57.871024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:26:57.890896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:26:57.906029 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:26:57.917076 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:26:57.928176 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:26:57.940267 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:26:57.959280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:26:57.971599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:57.992107 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:26:58.011960 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:26:58.032856 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:26:58.051443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:26:58.063845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:26:58.070284 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:26:58.071387 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Jan 13 21:26:58.071422 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
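
The journal-flush entry above reports 86.590ms spent flushing 929 entries to persistent storage; the quick arithmetic below puts that at roughly 93 microseconds per entry (both values copied from the log):

    flush_ms, entries = 86.590, 929
    print(f"{flush_ms / entries * 1000:.0f} us per entry")  # ~93 us
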
Jan 13 21:26:58.089917 kernel: loop1: detected capacity change from 0 to 54824
Jan 13 21:26:58.089799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:58.112409 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:26:58.166701 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:26:58.221092 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:26:58.242950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:26:58.270715 kernel: loop3: detected capacity change from 0 to 140768
Jan 13 21:26:58.306591 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 13 21:26:58.307116 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 13 21:26:58.319231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:58.364703 kernel: loop4: detected capacity change from 0 to 205544
Jan 13 21:26:58.401735 kernel: loop5: detected capacity change from 0 to 54824
Jan 13 21:26:58.439723 kernel: loop6: detected capacity change from 0 to 142488
Jan 13 21:26:58.502713 kernel: loop7: detected capacity change from 0 to 140768
Jan 13 21:26:58.550465 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jan 13 21:26:58.551386 (sd-merge)[1173]: Merged extensions into '/usr'.
Jan 13 21:26:58.562076 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:26:58.562108 systemd[1]: Reloading...
Jan 13 21:26:58.707759 zram_generator::config[1195]: No configuration found.
Jan 13 21:26:58.937061 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:26:58.974507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:26:59.085802 systemd[1]: Reloading finished in 522 ms.
Jan 13 21:26:59.153393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:26:59.163387 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:26:59.187974 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:26:59.204942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:26:59.222780 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:26:59.222810 systemd[1]: Reloading...
Jan 13 21:26:59.252735 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:26:59.253367 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:26:59.255142 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:26:59.255754 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 21:26:59.255883 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 21:26:59.263623 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:26:59.263643 systemd-tmpfiles[1240]: Skipping /boot
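
The (sd-merge) entries above show systemd-sysext activating the four extension images and merging them over /usr. On a Flatcar host the candidate images can be enumerated from sysext's search directories; a minimal sketch, assuming /etc/extensions and /var/lib/extensions (two of sysext's standard locations):

    import pathlib

    for d in ("/etc/extensions", "/var/lib/extensions"):
        root = pathlib.Path(d)
        if root.is_dir():
            for img in sorted(root.glob("*.raw")):
                # resolve() shows where a symlinked image (like kubernetes.raw) points
                print(img, "->", img.resolve())
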
Jan 13 21:26:59.292470 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:26:59.292711 systemd-tmpfiles[1240]: Skipping /boot
Jan 13 21:26:59.352698 zram_generator::config[1263]: No configuration found.
Jan 13 21:26:59.473086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:26:59.537836 systemd[1]: Reloading finished in 314 ms.
Jan 13 21:26:59.557430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:26:59.578325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:59.602073 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:26:59.621798 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:26:59.641027 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:26:59.659820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:59.678813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:59.687698 augenrules[1328]: No rules
Jan 13 21:26:59.699543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:26:59.713414 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:26:59.733438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:59.733902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:26:59.741022 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:26:59.757206 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jan 13 21:26:59.762171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:26:59.784132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:26:59.793921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:26:59.801152 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:26:59.812762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:59.815648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:59.828333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:26:59.840906 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:26:59.854889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:26:59.855159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:26:59.868885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:26:59.869131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:26:59.881871 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:26:59.882127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:26:59.892610 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:26:59.910793 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:26:59.943387 systemd[1]: Finished ensure-sysext.service. Jan 13 21:26:59.960888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:59.961182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:26:59.970948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:26:59.988608 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:27:00.009978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:27:00.028926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:27:00.046904 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:27:00.054938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:27:00.064894 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:27:00.075201 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:27:00.096896 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:27:00.106807 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:27:00.106863 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:27:00.108029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:27:00.109243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:27:00.121330 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:27:00.121599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:27:00.132331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:27:00.132797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:27:00.133355 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:27:00.134883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:27:00.143733 systemd-resolved[1324]: Positive Trust Anchors: Jan 13 21:27:00.143756 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:27:00.143827 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:27:00.150750 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1346) Jan 13 21:27:00.164647 systemd-resolved[1324]: Defaulting to hostname 'linux'. Jan 13 21:27:00.169797 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:27:00.179079 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:27:00.190740 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:27:00.212320 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:27:00.296482 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:27:00.292887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:27:00.314705 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:27:00.330719 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 21:27:00.438212 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:27:00.438264 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:27:00.438299 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:27:00.330227 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 21:27:00.340255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:27:00.340365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:27:00.382132 systemd-networkd[1379]: lo: Link UP Jan 13 21:27:00.382140 systemd-networkd[1379]: lo: Gained carrier Jan 13 21:27:00.389291 systemd-networkd[1379]: Enumeration completed Jan 13 21:27:00.389434 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:27:00.390120 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:00.390127 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:27:00.394054 systemd-networkd[1379]: eth0: Link UP Jan 13 21:27:00.394061 systemd-networkd[1379]: eth0: Gained carrier Jan 13 21:27:00.394087 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:00.399441 systemd[1]: Reached target network.target - Network. Jan 13 21:27:00.411781 systemd-networkd[1379]: eth0: DHCPv4 address 10.128.0.96/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 21:27:00.428917 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
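[Note] The "Positive Trust Anchors" entry is systemd-resolved loading the root zone's built-in DNSSEC DS record (key tag 20326), while the negative trust anchors list the private-use and reverse-lookup zones (10.in-addr.arpa, .local, home.arpa, and so on) that it exempts from DNSSEC validation. On a live system the effective resolver state can be inspected with resolvectl (a generic check, not specific to this boot):

  resolvectl status              # per-link DNS servers and DNSSEC setting
  resolvectl query example.com   # resolve a name through systemd-resolved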
Jan 13 21:27:00.450692 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 21:27:00.455909 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:27:00.455710 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 21:27:00.469602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 21:27:00.489191 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:27:00.511009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:27:00.528020 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:27:00.539425 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:27:00.558360 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:27:00.581146 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:27:00.613412 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:27:00.614654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:27:00.624261 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:27:00.633228 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:27:00.646159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:27:00.657983 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:27:00.668964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:27:00.679858 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:27:00.691010 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:27:00.701011 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:27:00.711822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:27:00.722801 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:27:00.722870 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:27:00.730789 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:27:00.739506 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:27:00.751462 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:27:00.764102 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:27:00.774786 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:27:00.786106 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:27:00.796715 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:27:00.806826 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:27:00.814902 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:27:00.814955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:27:00.824849 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:27:00.836659 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:27:00.854929 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:27:00.869413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:27:00.897912 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:27:00.907815 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:27:00.917190 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:27:00.933801 jq[1430]: false Jan 13 21:27:00.934192 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:27:00.954553 coreos-metadata[1428]: Jan 13 21:27:00.953 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 21:27:00.954386 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:27:00.959060 coreos-metadata[1428]: Jan 13 21:27:00.958 INFO Fetch successful Jan 13 21:27:00.959060 coreos-metadata[1428]: Jan 13 21:27:00.958 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 21:27:00.960019 coreos-metadata[1428]: Jan 13 21:27:00.959 INFO Fetch successful Jan 13 21:27:00.960019 coreos-metadata[1428]: Jan 13 21:27:00.959 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 21:27:00.960488 extend-filesystems[1431]: Found loop4 Jan 13 21:27:00.960488 extend-filesystems[1431]: Found loop5 Jan 13 21:27:00.960488 extend-filesystems[1431]: Found loop6 Jan 13 21:27:00.960488 extend-filesystems[1431]: Found loop7 Jan 13 21:27:00.960488 extend-filesystems[1431]: Found sda Jan 13 21:27:01.060476 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 21:27:01.060540 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda1 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda2 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda3 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found usr Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda4 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda6 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda7 Jan 13 21:27:01.060583 extend-filesystems[1431]: Found sda9 Jan 13 21:27:01.060583 extend-filesystems[1431]: Checking size of /dev/sda9 Jan 13 21:27:01.060583 extend-filesystems[1431]: Resized partition /dev/sda9 Jan 13 21:27:01.206854 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1368) Jan 13 21:27:01.050167 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:27:01.208124 coreos-metadata[1428]: Jan 13 21:27:00.960 INFO Fetch successful Jan 13 21:27:01.208124 coreos-metadata[1428]: Jan 13 21:27:00.960 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 21:27:01.208124 coreos-metadata[1428]: Jan 13 21:27:00.962 INFO Fetch successful Jan 13 21:27:00.975767 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
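[Note] The coreos-metadata fetches above go to the standard GCE metadata server at 169.254.169.254; the same endpoints can be queried by hand as long as the mandatory Metadata-Flavor header is supplied, e.g.:

  # Fetch the instance hostname and primary internal IP from GCE metadata.
  curl -s -H 'Metadata-Flavor: Google' \
    http://169.254.169.254/computeMetadata/v1/instance/hostname
  curl -s -H 'Metadata-Flavor: Google' \
    http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip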
Jan 13 21:27:01.208403 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:27:01.208403 extend-filesystems[1450]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 21:27:01.208403 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 21:27:01.208403 extend-filesystems[1450]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 13 21:27:01.050204 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: ---------------------------------------------------- Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: corporation. Support and training for ntp-4 are Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: available at https://www.nwtime.org/support Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: ---------------------------------------------------- Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: proto: precision = 0.076 usec (-24) Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: basedate set to 2025-01-01 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: gps base set to 2025-01-05 (week 2348) Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listen normally on 3 eth0 10.128.0.96:123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listen normally on 4 lo [::1]:123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:60%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:60%2#123 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:60%2 Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:27:01.258951 ntpd[1436]: 13 Jan 21:27:01 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:27:01.045883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:27:01.261661 extend-filesystems[1431]: Resized filesystem in /dev/sda9 Jan 13 21:27:01.050220 ntpd[1436]: ---------------------------------------------------- Jan 13 21:27:01.077928 systemd[1]: Starting systemd-logind.service - User Login Management... 
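[Note] The extend-filesystems output above is the standard on-line ext4 grow sequence: the sda9 partition was enlarged first, then resize2fs extended the mounted filesystem from 1617920 to 2538491 4k blocks. Done by hand, the equivalent is roughly (a sketch using the device names from this log; growpart comes from cloud-utils):

  sudo growpart /dev/sda 9   # grow partition 9 to fill the disk
  sudo resize2fs /dev/sda9   # grow the mounted ext4 filesystem to match
  df -h /                    # confirm the root filesystem gained the space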
Jan 13 21:27:01.050234 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:27:01.100752 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 13 21:27:01.050248 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:27:01.101541 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:27:01.050265 ntpd[1436]: corporation. Support and training for ntp-4 are Jan 13 21:27:01.111617 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:27:01.050279 ntpd[1436]: available at https://www.nwtime.org/support Jan 13 21:27:01.279802 update_engine[1458]: I20250113 21:27:01.220273 1458 main.cc:92] Flatcar Update Engine starting Jan 13 21:27:01.279802 update_engine[1458]: I20250113 21:27:01.229204 1458 update_check_scheduler.cc:74] Next update check in 6m25s Jan 13 21:27:01.154897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:27:01.050292 ntpd[1436]: ---------------------------------------------------- Jan 13 21:27:01.280383 jq[1462]: true Jan 13 21:27:01.179790 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:27:01.054345 ntpd[1436]: proto: precision = 0.076 usec (-24) Jan 13 21:27:01.212225 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:27:01.056410 dbus-daemon[1429]: [system] SELinux support is enabled Jan 13 21:27:01.213753 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:27:01.061278 ntpd[1436]: basedate set to 2025-01-01 Jan 13 21:27:01.214226 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:27:01.061307 ntpd[1436]: gps base set to 2025-01-05 (week 2348) Jan 13 21:27:01.214464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:27:01.083254 dbus-daemon[1429]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1379 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:27:01.248277 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:27:01.086472 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:27:01.248911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:27:01.086545 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:27:01.274377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:27:01.086851 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:27:01.274648 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
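[Note] update_engine is Flatcar's A/B update daemon; the "Next update check in 6m25s" line is its randomized first poll of the update server. Its state can be queried later with the stock client (the output fields named in the comment are from memory and may differ by release):

  update_engine_client -status   # prints CURRENT_OP, NEW_VERSION, etc.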
Jan 13 21:27:01.086905 ntpd[1436]: Listen normally on 3 eth0 10.128.0.96:123 Jan 13 21:27:01.281352 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (Power Button) Jan 13 21:27:01.086963 ntpd[1436]: Listen normally on 4 lo [::1]:123 Jan 13 21:27:01.281381 systemd-logind[1454]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 13 21:27:01.087028 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:60%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:27:01.281412 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:27:01.087064 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:60%2#123 Jan 13 21:27:01.283732 systemd-logind[1454]: New seat seat0. Jan 13 21:27:01.087088 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:60%2 Jan 13 21:27:01.087157 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Jan 13 21:27:01.091393 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:27:01.091442 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:27:01.289532 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:27:01.295602 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:27:01.330826 jq[1470]: true Jan 13 21:27:01.344916 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:27:01.358048 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:27:01.378812 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:27:01.404452 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:27:01.421592 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:27:01.424247 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:27:01.430359 tar[1465]: linux-amd64/helm Jan 13 21:27:01.432762 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:27:01.472005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:27:01.497460 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:27:01.507933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:27:01.517615 systemd[1]: Started sshd@0-10.128.0.96:22-147.75.109.163:37498.service - OpenSSH per-connection server daemon (147.75.109.163:37498). Jan 13 21:27:01.537095 systemd[1]: Starting sshkeys.service... Jan 13 21:27:01.544903 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:27:01.545231 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:27:01.574815 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:27:01.584957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:27:01.585434 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
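[Note] The bind(21) failures above are ntpd trying to listen on eth0's IPv6 link-local address before it is usable (most likely still tentative pending duplicate-address detection), so the bind returns "Cannot assign requested address"; ntpd watches the routing socket and retries, and later in this log it does listen on fe80::4001:aff:fe80:60. Once running, sync state can be checked with:

  ntpq -p      # peer list, reachability and offsets
  ntpq -c rv   # system variables, including sync status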
Jan 13 21:27:01.606084 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:27:01.628341 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:27:01.628617 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:27:01.670219 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:27:01.688397 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:27:01.709334 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:27:01.777289 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:27:01.797990 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:27:01.810622 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:27:01.815397 dbus-daemon[1429]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:27:01.817026 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:27:01.824929 coreos-metadata[1517]: Jan 13 21:27:01.820 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 21:27:01.828843 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:27:01.839152 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:27:01.856838 coreos-metadata[1517]: Jan 13 21:27:01.856 INFO Fetch failed with 404: resource not found Jan 13 21:27:01.857426 coreos-metadata[1517]: Jan 13 21:27:01.857 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 21:27:01.860696 coreos-metadata[1517]: Jan 13 21:27:01.860 INFO Fetch successful Jan 13 21:27:01.863730 coreos-metadata[1517]: Jan 13 21:27:01.862 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 21:27:01.866475 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:27:01.879250 coreos-metadata[1517]: Jan 13 21:27:01.878 INFO Fetch failed with 404: resource not found Jan 13 21:27:01.879250 coreos-metadata[1517]: Jan 13 21:27:01.879 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 21:27:01.880176 coreos-metadata[1517]: Jan 13 21:27:01.880 INFO Fetch failed with 404: resource not found Jan 13 21:27:01.880536 coreos-metadata[1517]: Jan 13 21:27:01.880 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 21:27:01.881621 coreos-metadata[1517]: Jan 13 21:27:01.881 INFO Fetch successful Jan 13 21:27:01.886105 unknown[1517]: wrote ssh authorized keys file for user: core Jan 13 21:27:01.935596 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:27:01.947251 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:27:01.947116 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:27:01.958708 polkitd[1527]: Started polkitd version 121 Jan 13 21:27:01.967521 systemd[1]: Finished sshkeys.service. 
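[Note] The keys fetched from instance/project metadata were written to /home/core/.ssh/authorized_keys. When a later "Accepted publickey ... SHA256:..." line needs to be matched to a key, the fingerprints can be recomputed directly from that file:

  # Print the SHA256 fingerprint of each authorized key; compare against
  # the fingerprint sshd logs for an accepted connection.
  ssh-keygen -lf /home/core/.ssh/authorized_keys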
Jan 13 21:27:01.977635 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:27:01.977768 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:27:01.983312 polkitd[1527]: Finished loading, compiling and executing 2 rules Jan 13 21:27:01.988282 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:27:01.988891 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:27:01.989614 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:27:02.031001 systemd-resolved[1324]: System hostname changed to 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal'. Jan 13 21:27:02.031603 systemd-hostnamed[1512]: Hostname set to (transient) Jan 13 21:27:02.051614 ntpd[1436]: bind(24) AF_INET6 fe80::4001:aff:fe80:60%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:27:02.052285 ntpd[1436]: 13 Jan 21:27:02 ntpd[1436]: bind(24) AF_INET6 fe80::4001:aff:fe80:60%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:27:02.052285 ntpd[1436]: 13 Jan 21:27:02 ntpd[1436]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:60%2#123 Jan 13 21:27:02.052285 ntpd[1436]: 13 Jan 21:27:02 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:60%2 Jan 13 21:27:02.051690 ntpd[1436]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:60%2#123 Jan 13 21:27:02.051713 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:60%2 Jan 13 21:27:02.072791 sshd[1508]: Accepted publickey for core from 147.75.109.163 port 37498 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:02.079175 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:02.097585 containerd[1471]: time="2025-01-13T21:27:02.094943853Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:27:02.114327 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:27:02.114514 systemd-logind[1454]: New session 1 of user core. Jan 13 21:27:02.137128 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:27:02.177293 containerd[1471]: time="2025-01-13T21:27:02.175285165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.177971791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178033555Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178063557Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178288361Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178320179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178417941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178493 containerd[1471]: time="2025-01-13T21:27:02.178438662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178879 containerd[1471]: time="2025-01-13T21:27:02.178714104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178879 containerd[1471]: time="2025-01-13T21:27:02.178744197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178879 containerd[1471]: time="2025-01-13T21:27:02.178768412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:02.178879 containerd[1471]: time="2025-01-13T21:27:02.178786630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.179053 containerd[1471]: time="2025-01-13T21:27:02.178908012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.180007 containerd[1471]: time="2025-01-13T21:27:02.179221291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:02.180007 containerd[1471]: time="2025-01-13T21:27:02.179403942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:02.180007 containerd[1471]: time="2025-01-13T21:27:02.179429918Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:27:02.180007 containerd[1471]: time="2025-01-13T21:27:02.179551171Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:27:02.180007 containerd[1471]: time="2025-01-13T21:27:02.179635928Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:27:02.183849 systemd-networkd[1379]: eth0: Gained IPv6LL Jan 13 21:27:02.188442 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:27:02.190401 containerd[1471]: time="2025-01-13T21:27:02.190359872Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:27:02.191840 containerd[1471]: time="2025-01-13T21:27:02.190608459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:27:02.191840 containerd[1471]: time="2025-01-13T21:27:02.190883369Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:27:02.191840 containerd[1471]: time="2025-01-13T21:27:02.190933919Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 21:27:02.191840 containerd[1471]: time="2025-01-13T21:27:02.190961161Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:27:02.191840 containerd[1471]: time="2025-01-13T21:27:02.191157418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:27:02.192110 containerd[1471]: time="2025-01-13T21:27:02.191969794Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:27:02.192273 containerd[1471]: time="2025-01-13T21:27:02.192149134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:27:02.192273 containerd[1471]: time="2025-01-13T21:27:02.192181284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:27:02.192273 containerd[1471]: time="2025-01-13T21:27:02.192207079Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:27:02.192273 containerd[1471]: time="2025-01-13T21:27:02.192245008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192394155Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192448715Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192478157Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192503102Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192524292Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192551077Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192572561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192606703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.192626 containerd[1471]: time="2025-01-13T21:27:02.192630433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.193332 containerd[1471]: time="2025-01-13T21:27:02.192650718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193557786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193595693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193641135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193686357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193711072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193733545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193761812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193783390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193809487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193831259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193858337Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193904501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193928255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194262 containerd[1471]: time="2025-01-13T21:27:02.193947202Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194020981Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194050966Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194072898Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194094756Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194111406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194577668Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194611283Z" level=info msg="NRI interface is disabled by configuration." 
Jan 13 21:27:02.194928 containerd[1471]: time="2025-01-13T21:27:02.194632101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:27:02.195291 containerd[1471]: time="2025-01-13T21:27:02.195140835Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:27:02.195291 containerd[1471]: time="2025-01-13T21:27:02.195241952Z" level=info msg="Connect containerd service" Jan 13 21:27:02.195879 containerd[1471]: time="2025-01-13T21:27:02.195298644Z" level=info msg="using legacy CRI server" Jan 13 21:27:02.195879 containerd[1471]: time="2025-01-13T21:27:02.195312278Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:27:02.195879 containerd[1471]: time="2025-01-13T21:27:02.195461882Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:27:02.196521 containerd[1471]: time="2025-01-13T21:27:02.196452758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.196648608Z" level=info msg="Start subscribing containerd event" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197168620Z" level=info msg="Start recovering state" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197273417Z" level=info msg="Start event monitor" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197297129Z" level=info msg="Start snapshots syncer" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197312878Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197326979Z" level=info msg="Start streaming server" Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197070923Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:27:02.197589 containerd[1471]: time="2025-01-13T21:27:02.197530750Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:27:02.197983 containerd[1471]: time="2025-01-13T21:27:02.197599152Z" level=info msg="containerd successfully booted in 0.108237s" Jan 13 21:27:02.200381 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:27:02.211234 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:27:02.227657 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:27:02.248813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:02.267785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:27:02.284326 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 13 21:27:02.308414 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:27:02.310817 init.sh[1554]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 13 21:27:02.313520 init.sh[1554]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 13 21:27:02.313520 init.sh[1554]: + /usr/bin/google_instance_setup Jan 13 21:27:02.328091 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:27:02.373186 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:27:02.531266 tar[1465]: linux-amd64/LICENSE Jan 13 21:27:02.531266 tar[1465]: linux-amd64/README.md Jan 13 21:27:02.552517 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:27:02.653630 systemd[1556]: Queued start job for default target default.target. Jan 13 21:27:02.662067 systemd[1556]: Created slice app.slice - User Application Slice. Jan 13 21:27:02.662135 systemd[1556]: Reached target paths.target - Paths. Jan 13 21:27:02.662162 systemd[1556]: Reached target timers.target - Timers. Jan 13 21:27:02.665655 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:27:02.693031 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:27:02.693721 systemd[1556]: Reached target sockets.target - Sockets. Jan 13 21:27:02.693911 systemd[1556]: Reached target basic.target - Basic System. Jan 13 21:27:02.694170 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:27:02.694660 systemd[1556]: Reached target default.target - Main User Target. 
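[Note] The "failed to load cni during init" error is expected on a fresh node: the CRI plugin's NetworkPluginConfDir (/etc/cni/net.d, per the config dump above) is empty until a network add-on installs a CNI config, and pod networking stays down until then. Purely for illustration, a minimal conflist of the kind such an add-on would drop there (the name, version and plugin choice are assumptions, not what this node later used):

  sudo mkdir -p /etc/cni/net.d
  sudo tee /etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      { "type": "loopback" }
    ]
  }
  EOF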
Jan 13 21:27:02.694764 systemd[1556]: Startup finished in 302ms. Jan 13 21:27:02.711912 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:27:02.967396 systemd[1]: Started sshd@1-10.128.0.96:22-147.75.109.163:37508.service - OpenSSH per-connection server daemon (147.75.109.163:37508). Jan 13 21:27:03.123549 instance-setup[1559]: INFO Running google_set_multiqueue. Jan 13 21:27:03.142855 instance-setup[1559]: INFO Set channels for eth0 to 2. Jan 13 21:27:03.148146 instance-setup[1559]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 13 21:27:03.150490 instance-setup[1559]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 13 21:27:03.150559 instance-setup[1559]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 13 21:27:03.152609 instance-setup[1559]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 13 21:27:03.152705 instance-setup[1559]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 13 21:27:03.154436 instance-setup[1559]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 13 21:27:03.155093 instance-setup[1559]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 13 21:27:03.158361 instance-setup[1559]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 13 21:27:03.165618 instance-setup[1559]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:27:03.169892 instance-setup[1559]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:27:03.171568 instance-setup[1559]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 13 21:27:03.171618 instance-setup[1559]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 13 21:27:03.192926 init.sh[1554]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 13 21:27:03.298811 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 37508 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:03.303915 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:03.316940 systemd-logind[1454]: New session 2 of user core. Jan 13 21:27:03.320804 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:27:03.379560 startup-script[1610]: INFO Starting startup scripts. Jan 13 21:27:03.386272 startup-script[1610]: INFO No startup scripts found in metadata. Jan 13 21:27:03.386348 startup-script[1610]: INFO Finished running startup scripts. Jan 13 21:27:03.407639 init.sh[1554]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 13 21:27:03.409342 init.sh[1554]: + daemon_pids=() Jan 13 21:27:03.409342 init.sh[1554]: + for d in accounts clock_skew network Jan 13 21:27:03.409342 init.sh[1554]: + daemon_pids+=($!) Jan 13 21:27:03.409342 init.sh[1554]: + for d in accounts clock_skew network Jan 13 21:27:03.409511 init.sh[1614]: + /usr/bin/google_accounts_daemon Jan 13 21:27:03.410898 init.sh[1554]: + daemon_pids+=($!) Jan 13 21:27:03.410898 init.sh[1554]: + for d in accounts clock_skew network Jan 13 21:27:03.410898 init.sh[1554]: + daemon_pids+=($!) 
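[Note] google_set_multiqueue above spreads the virtio-net queue interrupts across the two vCPUs and sets transmit packet steering; the sysfs writes it performs boil down to (a sketch using the IRQ numbers and XPS masks from this log):

  # Pin each virtio1 queue IRQ to one CPU (queues 0 -> CPU0, queues 1 -> CPU1).
  echo 0 > /proc/irq/31/smp_affinity_list
  echo 0 > /proc/irq/32/smp_affinity_list
  echo 1 > /proc/irq/33/smp_affinity_list
  echo 1 > /proc/irq/34/smp_affinity_list
  # XPS masks are CPU bitmasks: CPU0 (0x1) transmits on tx-0, CPU1 (0x2) on tx-1.
  echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
  echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus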
Jan 13 21:27:03.410898 init.sh[1554]: + NOTIFY_SOCKET=/run/systemd/notify Jan 13 21:27:03.410898 init.sh[1554]: + /usr/bin/systemd-notify --ready Jan 13 21:27:03.411166 init.sh[1615]: + /usr/bin/google_clock_skew_daemon Jan 13 21:27:03.412875 init.sh[1616]: + /usr/bin/google_network_daemon Jan 13 21:27:03.435708 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 13 21:27:03.451007 init.sh[1554]: + wait -n 1614 1615 1616 Jan 13 21:27:03.528231 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:03.537884 systemd[1]: sshd@1-10.128.0.96:22-147.75.109.163:37508.service: Deactivated successfully. Jan 13 21:27:03.543263 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:27:03.548457 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:27:03.550301 systemd-logind[1454]: Removed session 2. Jan 13 21:27:03.589084 systemd[1]: Started sshd@2-10.128.0.96:22-147.75.109.163:37522.service - OpenSSH per-connection server daemon (147.75.109.163:37522). Jan 13 21:27:03.850538 google-networking[1616]: INFO Starting Google Networking daemon. Jan 13 21:27:03.868421 google-clock-skew[1615]: INFO Starting Google Clock Skew daemon. Jan 13 21:27:03.877206 google-clock-skew[1615]: INFO Clock drift token has changed: 0. Jan 13 21:27:03.921712 sshd[1622]: Accepted publickey for core from 147.75.109.163 port 37522 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:03.924386 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:03.931845 systemd-logind[1454]: New session 3 of user core. Jan 13 21:27:03.933804 groupadd[1632]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 13 21:27:03.937929 groupadd[1632]: group added to /etc/gshadow: name=google-sudoers Jan 13 21:27:03.938685 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:27:03.993136 groupadd[1632]: new group: name=google-sudoers, GID=1000 Jan 13 21:27:04.029734 google-accounts[1614]: INFO Starting Google Accounts daemon. Jan 13 21:27:04.040816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:04.042840 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:27:04.000085 systemd-resolved[1324]: Clock change detected. Flushing caches. Jan 13 21:27:04.024029 systemd-journald[1115]: Time jumped backwards, rotating. Jan 13 21:27:04.026482 init.sh[1647]: useradd: invalid user name '0': use --badname to ignore Jan 13 21:27:04.001276 google-clock-skew[1615]: INFO Synced system time with hardware clock. Jan 13 21:27:04.026097 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:27:04.003937 google-accounts[1614]: WARNING OS Login not installed. Jan 13 21:27:04.005659 google-accounts[1614]: INFO Creating a new user account for 0. Jan 13 21:27:04.013143 google-accounts[1614]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 13 21:27:04.036511 systemd[1]: Startup finished in 1.011s (kernel) + 8.526s (initrd) + 8.861s (userspace) = 18.400s. Jan 13 21:27:04.110278 sshd[1622]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:04.115995 systemd[1]: sshd@2-10.128.0.96:22-147.75.109.163:37522.service: Deactivated successfully. Jan 13 21:27:04.118825 systemd[1]: session-3.scope: Deactivated successfully. 
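[Note] The init.sh trace above has the usual shape of a shell-implemented Type=notify service: start the worker daemons, record their PIDs, signal readiness over $NOTIFY_SOCKET with systemd-notify --ready, then block in wait -n so the unit fails if any worker dies. A stripped-down sketch of the same pattern (worker_one/worker_two are placeholders, not the actual oem-gce script):

  #!/bin/bash
  # Minimal Type=notify service in shell.
  daemon_pids=()
  worker_one & daemon_pids+=($!)
  worker_two & daemon_pids+=($!)
  systemd-notify --ready          # tell systemd startup is complete
  wait -n "${daemon_pids[@]}"     # return when the first worker exits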
Jan 13 21:27:04.119880 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:27:04.121677 systemd-logind[1454]: Removed session 3. Jan 13 21:27:04.805038 kubelet[1645]: E0113 21:27:04.804969 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:27:04.806886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:27:04.807145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:27:04.807573 systemd[1]: kubelet.service: Consumed 1.236s CPU time. Jan 13 21:27:05.007589 ntpd[1436]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:60%2]:123 Jan 13 21:27:05.008034 ntpd[1436]: 13 Jan 21:27:05 ntpd[1436]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:60%2]:123 Jan 13 21:27:14.170446 systemd[1]: Started sshd@3-10.128.0.96:22-147.75.109.163:47922.service - OpenSSH per-connection server daemon (147.75.109.163:47922). Jan 13 21:27:14.456483 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 47922 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:14.458342 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:14.463453 systemd-logind[1454]: New session 4 of user core. Jan 13 21:27:14.474298 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:27:14.674255 sshd[1664]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:14.678689 systemd[1]: sshd@3-10.128.0.96:22-147.75.109.163:47922.service: Deactivated successfully. Jan 13 21:27:14.680886 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:27:14.682717 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:27:14.684303 systemd-logind[1454]: Removed session 4. Jan 13 21:27:14.728443 systemd[1]: Started sshd@4-10.128.0.96:22-147.75.109.163:47932.service - OpenSSH per-connection server daemon (147.75.109.163:47932). Jan 13 21:27:14.926034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:27:14.935636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:15.014537 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 47932 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:15.015406 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:15.021442 systemd-logind[1454]: New session 5 of user core. Jan 13 21:27:15.028246 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:27:15.215817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:15.222915 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:27:15.223943 sshd[1671]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:15.231868 systemd[1]: sshd@4-10.128.0.96:22-147.75.109.163:47932.service: Deactivated successfully. Jan 13 21:27:15.235791 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:27:15.236917 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:27:15.238869 systemd-logind[1454]: Removed session 5. 
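[Note] The kubelet crash loop above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is normal before the node is provisioned: kubeadm writes that file during init/join, and systemd simply keeps restarting the unit until it exists. Purely to illustrate the file's shape, a minimal KubeletConfiguration might look like this (fields and values are illustrative, not recovered from this machine):

  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  EOF
  sudo systemctl restart kubelet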
Jan 13 21:27:15.279204 systemd[1]: Started sshd@5-10.128.0.96:22-147.75.109.163:47938.service - OpenSSH per-connection server daemon (147.75.109.163:47938). Jan 13 21:27:15.291066 kubelet[1683]: E0113 21:27:15.290992 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:27:15.295736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:27:15.295979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:27:15.568155 sshd[1692]: Accepted publickey for core from 147.75.109.163 port 47938 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:15.569934 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:15.576336 systemd-logind[1454]: New session 6 of user core. Jan 13 21:27:15.581257 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:27:15.782109 sshd[1692]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:15.787342 systemd[1]: sshd@5-10.128.0.96:22-147.75.109.163:47938.service: Deactivated successfully. Jan 13 21:27:15.789449 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:27:15.790371 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:27:15.791770 systemd-logind[1454]: Removed session 6. Jan 13 21:27:15.837431 systemd[1]: Started sshd@6-10.128.0.96:22-147.75.109.163:47942.service - OpenSSH per-connection server daemon (147.75.109.163:47942). Jan 13 21:27:16.126461 sshd[1700]: Accepted publickey for core from 147.75.109.163 port 47942 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:16.128300 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:16.134892 systemd-logind[1454]: New session 7 of user core. Jan 13 21:27:16.141332 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:27:16.321355 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:27:16.321853 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:27:16.338944 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 13 21:27:16.382574 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:16.387955 systemd[1]: sshd@6-10.128.0.96:22-147.75.109.163:47942.service: Deactivated successfully. Jan 13 21:27:16.390439 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:27:16.392193 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:27:16.393903 systemd-logind[1454]: Removed session 7. Jan 13 21:27:16.439301 systemd[1]: Started sshd@7-10.128.0.96:22-147.75.109.163:47956.service - OpenSSH per-connection server daemon (147.75.109.163:47956). Jan 13 21:27:16.738226 sshd[1708]: Accepted publickey for core from 147.75.109.163 port 47956 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:16.740109 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:16.746380 systemd-logind[1454]: New session 8 of user core. Jan 13 21:27:16.761284 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 21:27:16.919947 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:27:16.920572 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:27:16.926463 sudo[1712]: pam_unix(sudo:session): session closed for user root Jan 13 21:27:16.940223 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:27:16.940707 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:27:16.963531 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:27:16.966088 auditctl[1715]: No rules Jan 13 21:27:16.966673 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:27:16.966942 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:27:16.970749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:27:17.009925 augenrules[1733]: No rules Jan 13 21:27:17.011265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:27:17.012879 sudo[1711]: pam_unix(sudo:session): session closed for user root Jan 13 21:27:17.056957 sshd[1708]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:17.062433 systemd[1]: sshd@7-10.128.0.96:22-147.75.109.163:47956.service: Deactivated successfully. Jan 13 21:27:17.064671 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:27:17.065586 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:27:17.066930 systemd-logind[1454]: Removed session 8. Jan 13 21:27:17.112352 systemd[1]: Started sshd@8-10.128.0.96:22-147.75.109.163:47964.service - OpenSSH per-connection server daemon (147.75.109.163:47964). Jan 13 21:27:17.408438 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 47964 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:27:17.409745 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:17.416132 systemd-logind[1454]: New session 9 of user core. Jan 13 21:27:17.426296 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:27:17.587569 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:27:17.588106 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:27:18.023467 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:27:18.035655 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:27:18.474095 dockerd[1760]: time="2025-01-13T21:27:18.474007532Z" level=info msg="Starting up" Jan 13 21:27:18.614666 dockerd[1760]: time="2025-01-13T21:27:18.614615278Z" level=info msg="Loading containers: start." Jan 13 21:27:18.750350 kernel: Initializing XFRM netlink socket Jan 13 21:27:18.856482 systemd-networkd[1379]: docker0: Link UP Jan 13 21:27:18.878468 dockerd[1760]: time="2025-01-13T21:27:18.878410314Z" level=info msg="Loading containers: done." Jan 13 21:27:18.901230 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3456156971-merged.mount: Deactivated successfully. 
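Sessions 7 and 8 above leave a precise sudo audit trail: /usr/sbin/setenforce 1, removal of the two audit rules drop-ins, and a systemctl restart of audit-rules (which is why auditctl and augenrules both report "No rules" afterwards). When reviewing such a session it can help to lift the COMMAND= fields out mechanically; the regex below is tuned to exactly the journal format shown here and nothing more:

import re

# 'sudo[PID]: user : PWD=... ; USER=... ; COMMAND=/path args' as journald
# shows it; the lookahead stops at the timestamp of the next entry.
SUDO_CMD = re.compile(r"sudo\[\d+\]:\s+(\S+)\s+:.*?COMMAND=(.+?)(?= Jan \d|\Z)")

def sudo_commands(journal_text: str):
    """Yield (user, command) pairs from sudo entries, in log order."""
    for m in SUDO_CMD.finditer(journal_text):
        yield m.group(1), m.group(2).strip()

# e.g. ('core', '/usr/sbin/setenforce 1'),
#      ('core', '/usr/bin/systemctl restart audit-rules'), ...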
Jan 13 21:27:18.902560 dockerd[1760]: time="2025-01-13T21:27:18.901906379Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:27:18.902560 dockerd[1760]: time="2025-01-13T21:27:18.902105704Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:27:18.902560 dockerd[1760]: time="2025-01-13T21:27:18.902264020Z" level=info msg="Daemon has completed initialization" Jan 13 21:27:18.947292 dockerd[1760]: time="2025-01-13T21:27:18.946787071Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:27:18.947183 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:27:22.316357 containerd[1471]: time="2025-01-13T21:27:22.316305581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:27:22.873274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661050075.mount: Deactivated successfully. Jan 13 21:27:24.326372 containerd[1471]: time="2025-01-13T21:27:24.326300933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:24.328011 containerd[1471]: time="2025-01-13T21:27:24.327945107Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27982111" Jan 13 21:27:24.329020 containerd[1471]: time="2025-01-13T21:27:24.328949504Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:24.333442 containerd[1471]: time="2025-01-13T21:27:24.332778230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:24.338077 containerd[1471]: time="2025-01-13T21:27:24.337351342Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.02099227s" Jan 13 21:27:24.338077 containerd[1471]: time="2025-01-13T21:27:24.337404530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 21:27:24.341528 containerd[1471]: time="2025-01-13T21:27:24.341495065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:27:25.426756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:27:25.435305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:25.730888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
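dockerd above settles on storage-driver=overlay2 (version 26.1.0) and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. The same facts can be read back from the running daemon; a sketch assuming the docker CLI is installed and the daemon from this log is up:

import subprocess

def docker_fact(go_template: str) -> str:
    """Query `docker info --format <template>`; both flags are stock docker CLI."""
    return subprocess.run(
        ["docker", "info", "--format", go_template],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

print("storage driver:", docker_fact("{{.Driver}}"))         # overlay2, per the log
print("server version:", docker_fact("{{.ServerVersion}}"))  # 26.1.0, per the log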
Jan 13 21:27:25.743729 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:27:25.815958 containerd[1471]: time="2025-01-13T21:27:25.815428757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:25.819369 containerd[1471]: time="2025-01-13T21:27:25.819253228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24704091" Jan 13 21:27:25.822077 containerd[1471]: time="2025-01-13T21:27:25.821280608Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:25.823082 kubelet[1962]: E0113 21:27:25.823001 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:27:25.826801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:27:25.827130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:27:25.829800 containerd[1471]: time="2025-01-13T21:27:25.827218700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:25.829800 containerd[1471]: time="2025-01-13T21:27:25.828875045Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.487335807s" Jan 13 21:27:25.829800 containerd[1471]: time="2025-01-13T21:27:25.828916444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 21:27:25.830378 containerd[1471]: time="2025-01-13T21:27:25.830348177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:27:26.954612 containerd[1471]: time="2025-01-13T21:27:26.954540491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:26.956240 containerd[1471]: time="2025-01-13T21:27:26.956170062Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18653983" Jan 13 21:27:26.957419 containerd[1471]: time="2025-01-13T21:27:26.957347712Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:26.960978 containerd[1471]: time="2025-01-13T21:27:26.960916887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 21:27:26.962472 containerd[1471]: time="2025-01-13T21:27:26.962308919Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.131814104s" Jan 13 21:27:26.962472 containerd[1471]: time="2025-01-13T21:27:26.962355808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 21:27:26.963468 containerd[1471]: time="2025-01-13T21:27:26.963337808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:27:27.998679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192039669.mount: Deactivated successfully. Jan 13 21:27:28.608393 containerd[1471]: time="2025-01-13T21:27:28.608319920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:28.609625 containerd[1471]: time="2025-01-13T21:27:28.609555586Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138" Jan 13 21:27:28.611041 containerd[1471]: time="2025-01-13T21:27:28.610965108Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:28.614003 containerd[1471]: time="2025-01-13T21:27:28.613966851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:28.615280 containerd[1471]: time="2025-01-13T21:27:28.615099441Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.651688354s" Jan 13 21:27:28.615280 containerd[1471]: time="2025-01-13T21:27:28.615147181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:27:28.616218 containerd[1471]: time="2025-01-13T21:27:28.615967318Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:27:29.033769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572794324.mount: Deactivated successfully. 
Jan 13 21:27:30.094064 containerd[1471]: time="2025-01-13T21:27:30.093978739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.095660 containerd[1471]: time="2025-01-13T21:27:30.095594737Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 13 21:27:30.096842 containerd[1471]: time="2025-01-13T21:27:30.096773061Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.100662 containerd[1471]: time="2025-01-13T21:27:30.100601063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.102419 containerd[1471]: time="2025-01-13T21:27:30.102206455Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.486192034s" Jan 13 21:27:30.102419 containerd[1471]: time="2025-01-13T21:27:30.102254756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:27:30.104065 containerd[1471]: time="2025-01-13T21:27:30.103667580Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:27:30.501943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430787122.mount: Deactivated successfully. 
Jan 13 21:27:30.509894 containerd[1471]: time="2025-01-13T21:27:30.509827048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.511137 containerd[1471]: time="2025-01-13T21:27:30.511068358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jan 13 21:27:30.512292 containerd[1471]: time="2025-01-13T21:27:30.512222139Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.515348 containerd[1471]: time="2025-01-13T21:27:30.515283504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:30.516931 containerd[1471]: time="2025-01-13T21:27:30.516310514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 412.600427ms" Jan 13 21:27:30.516931 containerd[1471]: time="2025-01-13T21:27:30.516357590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 21:27:30.517294 containerd[1471]: time="2025-01-13T21:27:30.517223948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:27:30.940960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014007769.mount: Deactivated successfully. Jan 13 21:27:32.026424 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:27:33.009314 containerd[1471]: time="2025-01-13T21:27:33.009247205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:33.011067 containerd[1471]: time="2025-01-13T21:27:33.010983447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556" Jan 13 21:27:33.012113 containerd[1471]: time="2025-01-13T21:27:33.012017767Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:33.015665 containerd[1471]: time="2025-01-13T21:27:33.015626559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:33.017572 containerd[1471]: time="2025-01-13T21:27:33.017260872Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.499991663s" Jan 13 21:27:33.017572 containerd[1471]: time="2025-01-13T21:27:33.017310454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 21:27:35.926435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:27:35.938183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:36.257319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:36.267590 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:27:36.335505 kubelet[2114]: E0113 21:27:36.335443 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:27:36.338918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:27:36.339896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:27:37.086146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:37.100558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:37.146913 systemd[1]: Reloading requested from client PID 2128 ('systemctl') (unit session-9.scope)... Jan 13 21:27:37.146931 systemd[1]: Reloading... Jan 13 21:27:37.297118 zram_generator::config[2168]: No configuration found. Jan 13 21:27:37.443442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:27:37.543356 systemd[1]: Reloading finished in 395 ms. Jan 13 21:27:37.599334 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:27:37.599473 systemd[1]: kubelet.service: Failed with result 'signal'. 
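The three "Scheduled restart job" entries land at 21:27:14.926, 21:27:25.426 and 21:27:35.926, i.e. 10.5 s apart; that spacing is consistent with a RestartSec= of roughly 10 s in kubelet.service, though the unit file itself is not shown in this log. Measuring the spacing from the timestamps:

from datetime import datetime

def ts(s: str) -> datetime:
    # journald short timestamps like 'Jan 13 21:27:14.926034' (year omitted,
    # so strptime defaults it; only differences are meaningful here).
    return datetime.strptime(s, "%b %d %H:%M:%S.%f")

restarts = ["Jan 13 21:27:14.926034", "Jan 13 21:27:25.426756", "Jan 13 21:27:35.926435"]
t = [ts(s) for s in restarts]
print([(b - a).total_seconds() for a, b in zip(t, t[1:])])  # [10.500722, 10.499679]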
Jan 13 21:27:37.599797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:37.604582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:37.894713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:37.908619 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:27:37.969260 kubelet[2218]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:27:37.969260 kubelet[2218]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:27:37.969260 kubelet[2218]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:27:37.969260 kubelet[2218]: I0113 21:27:37.968120 2218 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:27:38.669375 kubelet[2218]: I0113 21:27:38.669315 2218 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:27:38.669375 kubelet[2218]: I0113 21:27:38.669353 2218 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:27:38.669744 kubelet[2218]: I0113 21:27:38.669706 2218 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:27:38.704095 kubelet[2218]: E0113 21:27:38.703631 2218 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:38.709382 kubelet[2218]: I0113 21:27:38.708733 2218 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:27:38.721450 kubelet[2218]: E0113 21:27:38.721376 2218 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:27:38.721450 kubelet[2218]: I0113 21:27:38.721419 2218 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:27:38.729277 kubelet[2218]: I0113 21:27:38.729244 2218 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:27:38.731462 kubelet[2218]: I0113 21:27:38.731414 2218 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:27:38.731719 kubelet[2218]: I0113 21:27:38.731667 2218 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:27:38.731955 kubelet[2218]: I0113 21:27:38.731703 2218 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:27:38.731955 kubelet[2218]: I0113 21:27:38.731941 2218 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:27:38.731955 kubelet[2218]: I0113 21:27:38.731957 2218 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:27:38.732237 kubelet[2218]: I0113 21:27:38.732190 2218 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:27:38.736936 kubelet[2218]: I0113 21:27:38.736889 2218 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:27:38.736936 kubelet[2218]: I0113 21:27:38.736936 2218 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:27:38.737123 kubelet[2218]: I0113 21:27:38.736982 2218 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:27:38.737123 kubelet[2218]: I0113 21:27:38.737003 2218 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:27:38.741748 kubelet[2218]: W0113 21:27:38.741410 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:38.741748 kubelet[2218]: E0113 21:27:38.741497 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:38.745559 kubelet[2218]: W0113 21:27:38.745327 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:38.745559 kubelet[2218]: E0113 21:27:38.745402 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:38.745978 kubelet[2218]: I0113 21:27:38.745926 2218 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:27:38.749500 kubelet[2218]: I0113 21:27:38.748748 2218 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:27:38.749500 kubelet[2218]: W0113 21:27:38.748839 2218 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:27:38.751466 kubelet[2218]: I0113 21:27:38.751271 2218 server.go:1269] "Started kubelet" Jan 13 21:27:38.753246 kubelet[2218]: I0113 21:27:38.753208 2218 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:27:38.754724 kubelet[2218]: I0113 21:27:38.754517 2218 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:27:38.758347 kubelet[2218]: I0113 21:27:38.757433 2218 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:27:38.760404 kubelet[2218]: I0113 21:27:38.760333 2218 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:27:38.760811 kubelet[2218]: I0113 21:27:38.760646 2218 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:27:38.764947 kubelet[2218]: E0113 21:27:38.760875 2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal.181a5db9c4771f8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,UID:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:27:38.751221645 +0000 UTC m=+0.837284191,LastTimestamp:2025-01-13 21:27:38.751221645 +0000 UTC m=+0.837284191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,}" Jan 13 21:27:38.767598 kubelet[2218]: I0113 21:27:38.766212 2218 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:27:38.767598 kubelet[2218]: I0113 21:27:38.767481 2218 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:27:38.768115 kubelet[2218]: E0113 21:27:38.767963 2218 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" not found" Jan 13 21:27:38.768653 kubelet[2218]: I0113 21:27:38.768629 2218 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:27:38.768739 kubelet[2218]: I0113 21:27:38.768715 2218 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:27:38.774076 kubelet[2218]: I0113 21:27:38.772765 2218 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:27:38.774076 kubelet[2218]: I0113 21:27:38.772893 2218 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:27:38.775354 kubelet[2218]: W0113 21:27:38.775299 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:38.775501 kubelet[2218]: E0113 21:27:38.775476 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:38.775735 kubelet[2218]: E0113 21:27:38.775704 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.96:6443: connect: connection refused" interval="200ms" Jan 13 21:27:38.777942 kubelet[2218]: I0113 21:27:38.777908 2218 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:27:38.791670 kubelet[2218]: E0113 21:27:38.791622 2218 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:27:38.800649 kubelet[2218]: I0113 21:27:38.800579 2218 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:27:38.802190 kubelet[2218]: I0113 21:27:38.802133 2218 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:27:38.802190 kubelet[2218]: I0113 21:27:38.802164 2218 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:27:38.802190 kubelet[2218]: I0113 21:27:38.802189 2218 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:27:38.802387 kubelet[2218]: E0113 21:27:38.802250 2218 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:27:38.812604 kubelet[2218]: W0113 21:27:38.812412 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:38.812604 kubelet[2218]: E0113 21:27:38.812491 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:38.821795 kubelet[2218]: I0113 21:27:38.821761 2218 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:27:38.821795 kubelet[2218]: I0113 21:27:38.821789 2218 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:27:38.822013 kubelet[2218]: I0113 21:27:38.821831 2218 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:27:38.825605 kubelet[2218]: I0113 21:27:38.825569 2218 policy_none.go:49] "None policy: Start" Jan 13 21:27:38.826717 kubelet[2218]: I0113 21:27:38.826590 2218 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:27:38.826717 kubelet[2218]: I0113 21:27:38.826627 2218 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:27:38.834360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:27:38.844614 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:27:38.849095 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:27:38.860036 kubelet[2218]: I0113 21:27:38.860008 2218 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:27:38.861181 kubelet[2218]: I0113 21:27:38.860684 2218 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:27:38.861181 kubelet[2218]: I0113 21:27:38.860705 2218 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:27:38.861181 kubelet[2218]: I0113 21:27:38.861133 2218 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:27:38.864120 kubelet[2218]: E0113 21:27:38.864034 2218 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" not found" Jan 13 21:27:38.920522 systemd[1]: Created slice kubepods-burstable-podb5cd67c9117fd9a22916ce93212dcb4b.slice - libcontainer container kubepods-burstable-podb5cd67c9117fd9a22916ce93212dcb4b.slice. Jan 13 21:27:38.946393 systemd[1]: Created slice kubepods-burstable-podcfed03fb45fe0965b783a0cc3dc0a804.slice - libcontainer container kubepods-burstable-podcfed03fb45fe0965b783a0cc3dc0a804.slice. 
Jan 13 21:27:38.952822 systemd[1]: Created slice kubepods-burstable-pod1423c8f3ffa81c766af791de15562452.slice - libcontainer container kubepods-burstable-pod1423c8f3ffa81c766af791de15562452.slice. Jan 13 21:27:38.970024 kubelet[2218]: I0113 21:27:38.969598 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970024 kubelet[2218]: I0113 21:27:38.969649 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970024 kubelet[2218]: I0113 21:27:38.969610 2218 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970024 kubelet[2218]: I0113 21:27:38.969681 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970024 kubelet[2218]: I0113 21:27:38.969735 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970639 kubelet[2218]: I0113 21:27:38.969766 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970639 kubelet[2218]: I0113 21:27:38.969795 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970639 kubelet[2218]: I0113 21:27:38.969820 2218 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1423c8f3ffa81c766af791de15562452-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"1423c8f3ffa81c766af791de15562452\") " pod="kube-system/kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970639 kubelet[2218]: I0113 21:27:38.969879 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970762 kubelet[2218]: I0113 21:27:38.969918 2218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.970762 kubelet[2218]: E0113 21:27:38.969989 2218 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.96:6443/api/v1/nodes\": dial tcp 10.128.0.96:6443: connect: connection refused" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:38.976621 kubelet[2218]: E0113 21:27:38.976569 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.96:6443: connect: connection refused" interval="400ms" Jan 13 21:27:39.175152 kubelet[2218]: I0113 21:27:39.175002 2218 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:39.175470 kubelet[2218]: E0113 21:27:39.175412 2218 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.96:6443/api/v1/nodes\": dial tcp 10.128.0.96:6443: connect: connection refused" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:39.243200 containerd[1471]: time="2025-01-13T21:27:39.243146100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:b5cd67c9117fd9a22916ce93212dcb4b,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:39.251349 containerd[1471]: time="2025-01-13T21:27:39.250996370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:cfed03fb45fe0965b783a0cc3dc0a804,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:39.256798 containerd[1471]: time="2025-01-13T21:27:39.256749383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:1423c8f3ffa81c766af791de15562452,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:39.377534 kubelet[2218]: E0113 21:27:39.377470 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.96:6443: connect: connection refused" interval="800ms" Jan 13 21:27:39.579598 kubelet[2218]: I0113 21:27:39.579560 2218 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:39.580015 kubelet[2218]: E0113 21:27:39.579977 2218 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.96:6443/api/v1/nodes\": dial tcp 10.128.0.96:6443: connect: connection refused" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:39.606581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800645077.mount: Deactivated successfully. Jan 13 21:27:39.614363 containerd[1471]: time="2025-01-13T21:27:39.614204241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:27:39.615471 containerd[1471]: time="2025-01-13T21:27:39.615429636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:27:39.616520 containerd[1471]: time="2025-01-13T21:27:39.616476621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:27:39.617715 containerd[1471]: time="2025-01-13T21:27:39.617654450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:27:39.618543 containerd[1471]: time="2025-01-13T21:27:39.618448417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 13 21:27:39.619734 containerd[1471]: time="2025-01-13T21:27:39.619678727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:27:39.620849 containerd[1471]: time="2025-01-13T21:27:39.620780158Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:27:39.624960 containerd[1471]: time="2025-01-13T21:27:39.624890406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:27:39.626657 containerd[1471]: time="2025-01-13T21:27:39.626030104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 374.924853ms" Jan 13 21:27:39.629895 kubelet[2218]: E0113 21:27:39.629752 2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.96:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal.181a5db9c4771f8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,UID:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:27:38.751221645 +0000 UTC m=+0.837284191,LastTimestamp:2025-01-13 21:27:38.751221645 +0000 UTC m=+0.837284191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,}" Jan 13 21:27:39.631763 containerd[1471]: time="2025-01-13T21:27:39.630971343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 374.127924ms" Jan 13 21:27:39.635084 containerd[1471]: time="2025-01-13T21:27:39.634716684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 391.461051ms" Jan 13 21:27:39.703615 kubelet[2218]: W0113 21:27:39.703448 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:39.703615 kubelet[2218]: E0113 21:27:39.703548 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:39.846503 containerd[1471]: time="2025-01-13T21:27:39.845367917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:39.846503 containerd[1471]: time="2025-01-13T21:27:39.845481967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:39.846503 containerd[1471]: time="2025-01-13T21:27:39.845518701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.846503 containerd[1471]: time="2025-01-13T21:27:39.845715300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.848223 containerd[1471]: time="2025-01-13T21:27:39.847813717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:39.848223 containerd[1471]: time="2025-01-13T21:27:39.847893704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:39.848223 containerd[1471]: time="2025-01-13T21:27:39.847922543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.849453 containerd[1471]: time="2025-01-13T21:27:39.848966948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.850282 containerd[1471]: time="2025-01-13T21:27:39.849661813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:39.850282 containerd[1471]: time="2025-01-13T21:27:39.849731291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:39.850282 containerd[1471]: time="2025-01-13T21:27:39.849769133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.850282 containerd[1471]: time="2025-01-13T21:27:39.849928162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:39.873596 kubelet[2218]: W0113 21:27:39.873546 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:39.873775 kubelet[2218]: E0113 21:27:39.873615 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:39.892936 systemd[1]: Started cri-containerd-b44e55a6defcb9f8e23ca74c2d6d4de4f4a124a771e3128496df6a1d6f72009e.scope - libcontainer container b44e55a6defcb9f8e23ca74c2d6d4de4f4a124a771e3128496df6a1d6f72009e. Jan 13 21:27:39.902485 systemd[1]: Started cri-containerd-3415f75f88a772dfbe675a0bf6f1d73234c7bae6ca64c8f7641c52bfd675ac0a.scope - libcontainer container 3415f75f88a772dfbe675a0bf6f1d73234c7bae6ca64c8f7641c52bfd675ac0a. Jan 13 21:27:39.925306 systemd[1]: Started cri-containerd-e4267520909ef9a6ae95d9598f459863bd452ae0d9cced6ebd95fd6468de8b87.scope - libcontainer container e4267520909ef9a6ae95d9598f459863bd452ae0d9cced6ebd95fd6468de8b87. 
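Every client-go reflector in this stretch fails the same way: dial tcp 10.128.0.96:6443: connect: connection refused. That is expected during control-plane bootstrap; the kubelet is up before the static-pod kube-apiserver it is in the middle of launching, and the informers simply retry until the socket opens. The same wait can be expressed as a bare TCP probe (address and port from the log; retry parameters are arbitrary):

import socket
import time

def wait_for_port(host: str, port: int, delay: float = 2.0, attempts: int = 30) -> bool:
    """Retry a TCP connect until it succeeds or the attempts run out."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:  # covers ConnectionRefusedError and timeouts
            time.sleep(delay)
    return False

print(wait_for_port("10.128.0.96", 6443))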
Jan 13 21:27:39.963380 kubelet[2218]: W0113 21:27:39.963331 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:39.963540 kubelet[2218]: E0113 21:27:39.963394 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:40.004422 containerd[1471]: time="2025-01-13T21:27:40.004361467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:b5cd67c9117fd9a22916ce93212dcb4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44e55a6defcb9f8e23ca74c2d6d4de4f4a124a771e3128496df6a1d6f72009e\"" Jan 13 21:27:40.009649 kubelet[2218]: E0113 21:27:40.009523 2218 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-21291" Jan 13 21:27:40.013202 containerd[1471]: time="2025-01-13T21:27:40.012953066Z" level=info msg="CreateContainer within sandbox \"b44e55a6defcb9f8e23ca74c2d6d4de4f4a124a771e3128496df6a1d6f72009e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:27:40.029501 containerd[1471]: time="2025-01-13T21:27:40.029455991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:cfed03fb45fe0965b783a0cc3dc0a804,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4267520909ef9a6ae95d9598f459863bd452ae0d9cced6ebd95fd6468de8b87\"" Jan 13 21:27:40.034243 kubelet[2218]: E0113 21:27:40.034086 2218 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flat" Jan 13 21:27:40.037966 containerd[1471]: time="2025-01-13T21:27:40.037923652Z" level=info msg="CreateContainer within sandbox \"e4267520909ef9a6ae95d9598f459863bd452ae0d9cced6ebd95fd6468de8b87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:27:40.047293 containerd[1471]: time="2025-01-13T21:27:40.047247673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal,Uid:1423c8f3ffa81c766af791de15562452,Namespace:kube-system,Attempt:0,} returns sandbox id \"3415f75f88a772dfbe675a0bf6f1d73234c7bae6ca64c8f7641c52bfd675ac0a\"" Jan 13 21:27:40.049658 kubelet[2218]: E0113 21:27:40.049616 2218 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-21291" Jan 13 21:27:40.051476 containerd[1471]: time="2025-01-13T21:27:40.051327828Z" level=info msg="CreateContainer within sandbox \"b44e55a6defcb9f8e23ca74c2d6d4de4f4a124a771e3128496df6a1d6f72009e\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b284dfb698adcb6f008c6f9d2d32600838401b0039e2b57795c1f25eab842dc\"" Jan 13 21:27:40.052429 containerd[1471]: time="2025-01-13T21:27:40.052333893Z" level=info msg="StartContainer for \"9b284dfb698adcb6f008c6f9d2d32600838401b0039e2b57795c1f25eab842dc\"" Jan 13 21:27:40.052770 containerd[1471]: time="2025-01-13T21:27:40.052592045Z" level=info msg="CreateContainer within sandbox \"3415f75f88a772dfbe675a0bf6f1d73234c7bae6ca64c8f7641c52bfd675ac0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:27:40.065484 containerd[1471]: time="2025-01-13T21:27:40.065361944Z" level=info msg="CreateContainer within sandbox \"e4267520909ef9a6ae95d9598f459863bd452ae0d9cced6ebd95fd6468de8b87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95a5bd2bee9824b9d490ca2053ba2aa71356e5d10b1a0f5019bebe6c6b871f20\"" Jan 13 21:27:40.066991 containerd[1471]: time="2025-01-13T21:27:40.066898209Z" level=info msg="StartContainer for \"95a5bd2bee9824b9d490ca2053ba2aa71356e5d10b1a0f5019bebe6c6b871f20\"" Jan 13 21:27:40.078559 containerd[1471]: time="2025-01-13T21:27:40.078438494Z" level=info msg="CreateContainer within sandbox \"3415f75f88a772dfbe675a0bf6f1d73234c7bae6ca64c8f7641c52bfd675ac0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17a4fb9cf1781c794028c979ad9ef969f8d39a858882d6d3f995b078d1b33378\"" Jan 13 21:27:40.079171 containerd[1471]: time="2025-01-13T21:27:40.079136997Z" level=info msg="StartContainer for \"17a4fb9cf1781c794028c979ad9ef969f8d39a858882d6d3f995b078d1b33378\"" Jan 13 21:27:40.105540 systemd[1]: Started cri-containerd-9b284dfb698adcb6f008c6f9d2d32600838401b0039e2b57795c1f25eab842dc.scope - libcontainer container 9b284dfb698adcb6f008c6f9d2d32600838401b0039e2b57795c1f25eab842dc. Jan 13 21:27:40.140307 systemd[1]: Started cri-containerd-95a5bd2bee9824b9d490ca2053ba2aa71356e5d10b1a0f5019bebe6c6b871f20.scope - libcontainer container 95a5bd2bee9824b9d490ca2053ba2aa71356e5d10b1a0f5019bebe6c6b871f20. Jan 13 21:27:40.154416 systemd[1]: Started cri-containerd-17a4fb9cf1781c794028c979ad9ef969f8d39a858882d6d3f995b078d1b33378.scope - libcontainer container 17a4fb9cf1781c794028c979ad9ef969f8d39a858882d6d3f995b078d1b33378. 
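The "Hostname for pod was too long, truncated it" entries above show kubelet cutting pod names to the 63-character DNS label limit (hostnameMaxLen=63). A small Go sketch of that truncation, using the 73-character pod name from the log; the slicing here is a simplification shown for illustration, not kubelet's actual code:

// hostname_truncate.go — illustrates the truncation logged above: the
// pod name is cut to the 63-character hostname limit.
package main

import "fmt"

func main() {
	const hostnameMaxLen = 63
	name := "kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal"
	if len(name) > hostnameMaxLen {
		name = name[:hostnameMaxLen]
	}
	// Prints the same truncatedHostname as the log:
	// kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-21291
	fmt.Println(name)
}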
Jan 13 21:27:40.178495 kubelet[2218]: E0113 21:27:40.178308 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.96:6443: connect: connection refused" interval="1.6s" Jan 13 21:27:40.234413 containerd[1471]: time="2025-01-13T21:27:40.234219611Z" level=info msg="StartContainer for \"9b284dfb698adcb6f008c6f9d2d32600838401b0039e2b57795c1f25eab842dc\" returns successfully" Jan 13 21:27:40.249016 containerd[1471]: time="2025-01-13T21:27:40.248951102Z" level=info msg="StartContainer for \"95a5bd2bee9824b9d490ca2053ba2aa71356e5d10b1a0f5019bebe6c6b871f20\" returns successfully" Jan 13 21:27:40.278868 kubelet[2218]: W0113 21:27:40.278753 2218 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.96:6443: connect: connection refused Jan 13 21:27:40.279070 kubelet[2218]: E0113 21:27:40.278881 2218 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:27:40.304124 containerd[1471]: time="2025-01-13T21:27:40.303717680Z" level=info msg="StartContainer for \"17a4fb9cf1781c794028c979ad9ef969f8d39a858882d6d3f995b078d1b33378\" returns successfully" Jan 13 21:27:40.387626 kubelet[2218]: I0113 21:27:40.386981 2218 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:43.460182 kubelet[2218]: I0113 21:27:43.460116 2218 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:43.520254 kubelet[2218]: E0113 21:27:43.520199 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 21:27:43.744296 kubelet[2218]: I0113 21:27:43.744138 2218 apiserver.go:52] "Watching apiserver" Jan 13 21:27:43.769364 kubelet[2218]: I0113 21:27:43.769287 2218 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:27:45.395743 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-9.scope)... Jan 13 21:27:45.395768 systemd[1]: Reloading... Jan 13 21:27:45.529087 zram_generator::config[2532]: No configuration found. Jan 13 21:27:45.654989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:27:45.776086 systemd[1]: Reloading finished in 379 ms. Jan 13 21:27:45.827299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:45.838716 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:27:45.839010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
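The lease errors above show the retry interval doubling between failures: interval="1.6s" while the apiserver is unreachable, then interval="3.2s" on the next failure. A sketch of that doubling pattern; the starting value is taken from the log, but the loop is illustrative rather than kubelet's actual backoff implementation:

// lease_backoff.go — illustrative doubling of the retry interval seen in
// the "Failed to ensure lease exists, will retry" entries (1.6s, then 3.2s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 1600 * time.Millisecond // first logged retry interval
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d failed, next retry in %s\n", attempt, interval)
		interval *= 2 // 1.6s -> 3.2s -> 6.4s
	}
}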
Jan 13 21:27:45.839112 systemd[1]: kubelet.service: Consumed 1.288s CPU time, 118.2M memory peak, 0B memory swap peak. Jan 13 21:27:45.845411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:46.118232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:27:46.133686 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:27:46.219087 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:27:46.219087 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:27:46.219087 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:27:46.219087 kubelet[2579]: I0113 21:27:46.218608 2579 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:27:46.228675 kubelet[2579]: I0113 21:27:46.228632 2579 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:27:46.228675 kubelet[2579]: I0113 21:27:46.228668 2579 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:27:46.230079 kubelet[2579]: I0113 21:27:46.229010 2579 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:27:46.234804 kubelet[2579]: I0113 21:27:46.234770 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:27:46.246804 kubelet[2579]: I0113 21:27:46.245836 2579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:27:46.251890 kubelet[2579]: E0113 21:27:46.251827 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:27:46.252010 kubelet[2579]: I0113 21:27:46.251900 2579 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:27:46.256382 kubelet[2579]: I0113 21:27:46.256357 2579 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:27:46.256771 kubelet[2579]: I0113 21:27:46.256580 2579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:27:46.256983 kubelet[2579]: I0113 21:27:46.256825 2579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:27:46.257642 kubelet[2579]: I0113 21:27:46.256864 2579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:27:46.257642 kubelet[2579]: I0113 21:27:46.257193 2579 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:27:46.257642 kubelet[2579]: I0113 21:27:46.257212 2579 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:27:46.257642 kubelet[2579]: I0113 21:27:46.257314 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:27:46.257948 kubelet[2579]: I0113 21:27:46.257499 2579 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:27:46.257948 kubelet[2579]: I0113 21:27:46.257519 2579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:27:46.258880 kubelet[2579]: I0113 21:27:46.258370 2579 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:27:46.260095 kubelet[2579]: I0113 21:27:46.260069 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:27:46.261749 kubelet[2579]: I0113 21:27:46.261723 2579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:27:46.262603 kubelet[2579]: I0113 21:27:46.262467 2579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:27:46.263206 kubelet[2579]: I0113 21:27:46.263008 2579 server.go:1269] "Started kubelet" Jan 13 21:27:46.269871 kubelet[2579]: I0113 21:27:46.269455 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
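The nodeConfig dump above includes the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A hedged Go sketch that decodes a trimmed copy of that JSON fragment; the struct is a local simplification, not kubelet's real type:

// thresholds.go — parses a trimmed copy of the HardEvictionThresholds
// fragment from the nodeConfig line above (two of the five thresholds).
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // nil when the threshold is percentage-based
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}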
Jan 13 21:27:46.280003 kubelet[2579]: I0113 21:27:46.279950 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:27:46.282072 kubelet[2579]: I0113 21:27:46.281318 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:27:46.282072 kubelet[2579]: I0113 21:27:46.281677 2579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:27:46.287018 kubelet[2579]: I0113 21:27:46.286988 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:27:46.293072 kubelet[2579]: I0113 21:27:46.292187 2579 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:27:46.293072 kubelet[2579]: E0113 21:27:46.292527 2579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" not found" Jan 13 21:27:46.293750 kubelet[2579]: I0113 21:27:46.293729 2579 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:27:46.297188 kubelet[2579]: I0113 21:27:46.297162 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:27:46.302343 kubelet[2579]: I0113 21:27:46.302323 2579 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:27:46.302561 kubelet[2579]: I0113 21:27:46.302536 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:27:46.304460 kubelet[2579]: I0113 21:27:46.304437 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:27:46.304602 kubelet[2579]: I0113 21:27:46.304587 2579 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:27:46.304715 kubelet[2579]: I0113 21:27:46.304700 2579 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:27:46.304857 kubelet[2579]: E0113 21:27:46.304833 2579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:27:46.320365 kubelet[2579]: I0113 21:27:46.318792 2579 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:27:46.320365 kubelet[2579]: I0113 21:27:46.318820 2579 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:27:46.320365 kubelet[2579]: I0113 21:27:46.318911 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:27:46.321354 kubelet[2579]: E0113 21:27:46.321320 2579 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:27:46.383537 kubelet[2579]: I0113 21:27:46.383403 2579 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:27:46.383537 kubelet[2579]: I0113 21:27:46.383434 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:27:46.383537 kubelet[2579]: I0113 21:27:46.383459 2579 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:27:46.384745 kubelet[2579]: I0113 21:27:46.384708 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:27:46.384944 kubelet[2579]: I0113 21:27:46.384741 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:27:46.384944 kubelet[2579]: I0113 21:27:46.384767 2579 policy_none.go:49] "None policy: Start" Jan 13 21:27:46.386268 kubelet[2579]: I0113 21:27:46.386207 2579 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:27:46.386268 kubelet[2579]: I0113 21:27:46.386244 2579 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:27:46.386606 kubelet[2579]: I0113 21:27:46.386488 2579 state_mem.go:75] "Updated machine memory state" Jan 13 21:27:46.401175 kubelet[2579]: I0113 21:27:46.401037 2579 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:27:46.401327 kubelet[2579]: I0113 21:27:46.401311 2579 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:27:46.401396 kubelet[2579]: I0113 21:27:46.401328 2579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:27:46.402653 kubelet[2579]: I0113 21:27:46.401861 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:27:46.421470 kubelet[2579]: W0113 21:27:46.421429 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:27:46.427479 kubelet[2579]: W0113 21:27:46.425672 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:27:46.427604 kubelet[2579]: W0113 21:27:46.427572 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:27:46.503275 kubelet[2579]: I0113 21:27:46.503136 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503275 kubelet[2579]: I0113 21:27:46.503200 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 
13 21:27:46.503275 kubelet[2579]: I0113 21:27:46.503237 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503275 kubelet[2579]: I0113 21:27:46.503271 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503613 kubelet[2579]: I0113 21:27:46.503302 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503613 kubelet[2579]: I0113 21:27:46.503330 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5cd67c9117fd9a22916ce93212dcb4b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"b5cd67c9117fd9a22916ce93212dcb4b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503613 kubelet[2579]: I0113 21:27:46.503356 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503613 kubelet[2579]: I0113 21:27:46.503397 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfed03fb45fe0965b783a0cc3dc0a804-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"cfed03fb45fe0965b783a0cc3dc0a804\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.503808 kubelet[2579]: I0113 21:27:46.503428 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1423c8f3ffa81c766af791de15562452-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" (UID: \"1423c8f3ffa81c766af791de15562452\") " pod="kube-system/kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.521454 kubelet[2579]: I0113 21:27:46.521328 2579 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.531247 kubelet[2579]: I0113 21:27:46.530535 2579 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.531247 kubelet[2579]: I0113 21:27:46.530634 2579 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:46.638369 update_engine[1458]: I20250113 21:27:46.637328 1458 update_attempter.cc:509] Updating boot flags... Jan 13 21:27:46.716711 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2625) Jan 13 21:27:46.865265 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2628) Jan 13 21:27:47.261193 kubelet[2579]: I0113 21:27:47.261143 2579 apiserver.go:52] "Watching apiserver" Jan 13 21:27:47.297945 kubelet[2579]: I0113 21:27:47.297905 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:27:47.363216 kubelet[2579]: W0113 21:27:47.363168 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:27:47.364015 kubelet[2579]: E0113 21:27:47.363981 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:27:47.403064 kubelet[2579]: I0113 21:27:47.402950 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" podStartSLOduration=1.402924754 podStartE2EDuration="1.402924754s" podCreationTimestamp="2025-01-13 21:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:47.390916314 +0000 UTC m=+1.247219365" watchObservedRunningTime="2025-01-13 21:27:47.402924754 +0000 UTC m=+1.259227793" Jan 13 21:27:47.404107 kubelet[2579]: I0113 21:27:47.404008 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" podStartSLOduration=1.403989819 podStartE2EDuration="1.403989819s" podCreationTimestamp="2025-01-13 21:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:47.402249493 +0000 UTC m=+1.258552542" watchObservedRunningTime="2025-01-13 21:27:47.403989819 +0000 UTC m=+1.260292867" Jan 13 21:27:47.443524 kubelet[2579]: I0113 21:27:47.443451 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" podStartSLOduration=1.443425038 podStartE2EDuration="1.443425038s" podCreationTimestamp="2025-01-13 21:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:47.430650804 +0000 UTC m=+1.286953853" watchObservedRunningTime="2025-01-13 21:27:47.443425038 +0000 UTC m=+1.299728080" Jan 
13 21:27:50.771535 kubelet[2579]: I0113 21:27:50.771418 2579 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:27:50.772670 containerd[1471]: time="2025-01-13T21:27:50.771894688Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:27:50.773306 kubelet[2579]: I0113 21:27:50.773202 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:27:50.851107 systemd[1]: Created slice kubepods-besteffort-pod39aacdd2_8616_4b5c_97b4_c05af98e8c63.slice - libcontainer container kubepods-besteffort-pod39aacdd2_8616_4b5c_97b4_c05af98e8c63.slice. Jan 13 21:27:50.934754 kubelet[2579]: I0113 21:27:50.934522 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39aacdd2-8616-4b5c-97b4-c05af98e8c63-kube-proxy\") pod \"kube-proxy-xtfs2\" (UID: \"39aacdd2-8616-4b5c-97b4-c05af98e8c63\") " pod="kube-system/kube-proxy-xtfs2" Jan 13 21:27:50.934754 kubelet[2579]: I0113 21:27:50.934585 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzfth\" (UniqueName: \"kubernetes.io/projected/39aacdd2-8616-4b5c-97b4-c05af98e8c63-kube-api-access-dzfth\") pod \"kube-proxy-xtfs2\" (UID: \"39aacdd2-8616-4b5c-97b4-c05af98e8c63\") " pod="kube-system/kube-proxy-xtfs2" Jan 13 21:27:50.934754 kubelet[2579]: I0113 21:27:50.934667 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39aacdd2-8616-4b5c-97b4-c05af98e8c63-xtables-lock\") pod \"kube-proxy-xtfs2\" (UID: \"39aacdd2-8616-4b5c-97b4-c05af98e8c63\") " pod="kube-system/kube-proxy-xtfs2" Jan 13 21:27:50.934754 kubelet[2579]: I0113 21:27:50.934699 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39aacdd2-8616-4b5c-97b4-c05af98e8c63-lib-modules\") pod \"kube-proxy-xtfs2\" (UID: \"39aacdd2-8616-4b5c-97b4-c05af98e8c63\") " pod="kube-system/kube-proxy-xtfs2" Jan 13 21:27:51.043497 kubelet[2579]: E0113 21:27:51.043070 2579 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:27:51.043497 kubelet[2579]: E0113 21:27:51.043128 2579 projected.go:194] Error preparing data for projected volume kube-api-access-dzfth for pod kube-system/kube-proxy-xtfs2: configmap "kube-root-ca.crt" not found Jan 13 21:27:51.043497 kubelet[2579]: E0113 21:27:51.043221 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39aacdd2-8616-4b5c-97b4-c05af98e8c63-kube-api-access-dzfth podName:39aacdd2-8616-4b5c-97b4-c05af98e8c63 nodeName:}" failed. No retries permitted until 2025-01-13 21:27:51.543191775 +0000 UTC m=+5.399494818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzfth" (UniqueName: "kubernetes.io/projected/39aacdd2-8616-4b5c-97b4-c05af98e8c63-kube-api-access-dzfth") pod "kube-proxy-xtfs2" (UID: "39aacdd2-8616-4b5c-97b4-c05af98e8c63") : configmap "kube-root-ca.crt" not found Jan 13 21:27:51.760252 containerd[1471]: time="2025-01-13T21:27:51.760164219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtfs2,Uid:39aacdd2-8616-4b5c-97b4-c05af98e8c63,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:51.803129 containerd[1471]: time="2025-01-13T21:27:51.802427033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:51.803129 containerd[1471]: time="2025-01-13T21:27:51.802503714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:51.803129 containerd[1471]: time="2025-01-13T21:27:51.802529384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:51.803806 containerd[1471]: time="2025-01-13T21:27:51.803163730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:51.859263 systemd[1]: Started cri-containerd-25cf04f0b8dc552a42e3ce532dbe8b0718fac24b30e16e2fef01ab83ca70b5d5.scope - libcontainer container 25cf04f0b8dc552a42e3ce532dbe8b0718fac24b30e16e2fef01ab83ca70b5d5. Jan 13 21:27:51.910978 systemd[1]: Created slice kubepods-besteffort-pod49b1cc2b_2d53_443b_ab69_2a1e9dbc20a1.slice - libcontainer container kubepods-besteffort-pod49b1cc2b_2d53_443b_ab69_2a1e9dbc20a1.slice. Jan 13 21:27:51.943893 kubelet[2579]: I0113 21:27:51.943762 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksn8x\" (UniqueName: \"kubernetes.io/projected/49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1-kube-api-access-ksn8x\") pod \"tigera-operator-76c4976dd7-xz2rv\" (UID: \"49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1\") " pod="tigera-operator/tigera-operator-76c4976dd7-xz2rv" Jan 13 21:27:51.943893 kubelet[2579]: I0113 21:27:51.943821 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1-var-lib-calico\") pod \"tigera-operator-76c4976dd7-xz2rv\" (UID: \"49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1\") " pod="tigera-operator/tigera-operator-76c4976dd7-xz2rv" Jan 13 21:27:52.018972 containerd[1471]: time="2025-01-13T21:27:52.018525766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtfs2,Uid:39aacdd2-8616-4b5c-97b4-c05af98e8c63,Namespace:kube-system,Attempt:0,} returns sandbox id \"25cf04f0b8dc552a42e3ce532dbe8b0718fac24b30e16e2fef01ab83ca70b5d5\"" Jan 13 21:27:52.023144 containerd[1471]: time="2025-01-13T21:27:52.023100860Z" level=info msg="CreateContainer within sandbox \"25cf04f0b8dc552a42e3ce532dbe8b0718fac24b30e16e2fef01ab83ca70b5d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:27:52.044909 containerd[1471]: time="2025-01-13T21:27:52.044755416Z" level=info msg="CreateContainer within sandbox \"25cf04f0b8dc552a42e3ce532dbe8b0718fac24b30e16e2fef01ab83ca70b5d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"761bcb5d68eb6a4c45f1ad6b6a7c1e9a95759c5dc00de403c64c3caea2ad0ee2\"" Jan 13 21:27:52.047180 
containerd[1471]: time="2025-01-13T21:27:52.045736219Z" level=info msg="StartContainer for \"761bcb5d68eb6a4c45f1ad6b6a7c1e9a95759c5dc00de403c64c3caea2ad0ee2\"" Jan 13 21:27:52.084390 systemd[1]: Started cri-containerd-761bcb5d68eb6a4c45f1ad6b6a7c1e9a95759c5dc00de403c64c3caea2ad0ee2.scope - libcontainer container 761bcb5d68eb6a4c45f1ad6b6a7c1e9a95759c5dc00de403c64c3caea2ad0ee2. Jan 13 21:27:52.125264 containerd[1471]: time="2025-01-13T21:27:52.125200959Z" level=info msg="StartContainer for \"761bcb5d68eb6a4c45f1ad6b6a7c1e9a95759c5dc00de403c64c3caea2ad0ee2\" returns successfully" Jan 13 21:27:52.216311 containerd[1471]: time="2025-01-13T21:27:52.216196149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xz2rv,Uid:49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:27:52.253185 containerd[1471]: time="2025-01-13T21:27:52.252744158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:52.253185 containerd[1471]: time="2025-01-13T21:27:52.252823713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:52.253185 containerd[1471]: time="2025-01-13T21:27:52.252853121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:52.253185 containerd[1471]: time="2025-01-13T21:27:52.252991896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:52.296289 systemd[1]: Started cri-containerd-e4728b0989cbe2e7f4887b0ef0b9d8e4dc1425abf52fbd35223369109e495a9a.scope - libcontainer container e4728b0989cbe2e7f4887b0ef0b9d8e4dc1425abf52fbd35223369109e495a9a. Jan 13 21:27:52.465651 containerd[1471]: time="2025-01-13T21:27:52.465576489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xz2rv,Uid:49b1cc2b-2d53-443b-ab69-2a1e9dbc20a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e4728b0989cbe2e7f4887b0ef0b9d8e4dc1425abf52fbd35223369109e495a9a\"" Jan 13 21:27:52.467995 containerd[1471]: time="2025-01-13T21:27:52.467918007Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:27:52.509992 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 13 21:27:52.554289 sshd[1741]: pam_unix(sshd:session): session closed for user core Jan 13 21:27:52.561446 systemd[1]: sshd@8-10.128.0.96:22-147.75.109.163:47964.service: Deactivated successfully. Jan 13 21:27:52.563841 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:27:52.566282 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:27:52.566578 systemd[1]: session-9.scope: Consumed 6.862s CPU time, 158.6M memory peak, 0B memory swap peak. Jan 13 21:27:52.570173 systemd-logind[1454]: Removed session 9. Jan 13 21:27:52.661030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324525543.mount: Deactivated successfully. Jan 13 21:27:53.662491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863272370.mount: Deactivated successfully. 
Jan 13 21:27:54.385404 containerd[1471]: time="2025-01-13T21:27:54.385341487Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:54.386705 containerd[1471]: time="2025-01-13T21:27:54.386634814Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763725" Jan 13 21:27:54.388178 containerd[1471]: time="2025-01-13T21:27:54.388140599Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:54.392926 containerd[1471]: time="2025-01-13T21:27:54.392859345Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:54.394592 containerd[1471]: time="2025-01-13T21:27:54.393986608Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.926008648s" Jan 13 21:27:54.394592 containerd[1471]: time="2025-01-13T21:27:54.394032987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:27:54.397242 containerd[1471]: time="2025-01-13T21:27:54.397190700Z" level=info msg="CreateContainer within sandbox \"e4728b0989cbe2e7f4887b0ef0b9d8e4dc1425abf52fbd35223369109e495a9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:27:54.419485 containerd[1471]: time="2025-01-13T21:27:54.419417289Z" level=info msg="CreateContainer within sandbox \"e4728b0989cbe2e7f4887b0ef0b9d8e4dc1425abf52fbd35223369109e495a9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"efafec2e4b812c7b3c4196c9a89d1cabf0f8e68190060326c85e8922aa376aac\"" Jan 13 21:27:54.420256 containerd[1471]: time="2025-01-13T21:27:54.420208079Z" level=info msg="StartContainer for \"efafec2e4b812c7b3c4196c9a89d1cabf0f8e68190060326c85e8922aa376aac\"" Jan 13 21:27:54.468292 systemd[1]: Started cri-containerd-efafec2e4b812c7b3c4196c9a89d1cabf0f8e68190060326c85e8922aa376aac.scope - libcontainer container efafec2e4b812c7b3c4196c9a89d1cabf0f8e68190060326c85e8922aa376aac. 
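From the pull record above, the operator image (21,758,492 bytes by repo digest) arrived in 1.926008648s, roughly 11.3 MB/s. A one-liner to check the arithmetic:

// pull_rate.go — back-of-the-envelope throughput for the image pull above.
package main

import "fmt"

func main() {
	const imageBytes = 21758492     // image size from the log
	const pullSeconds = 1.926008648 // pull duration from the log
	fmt.Printf("%.1f MB/s\n", imageBytes/pullSeconds/1e6) // ≈ 11.3
}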
Jan 13 21:27:54.502863 containerd[1471]: time="2025-01-13T21:27:54.502625381Z" level=info msg="StartContainer for \"efafec2e4b812c7b3c4196c9a89d1cabf0f8e68190060326c85e8922aa376aac\" returns successfully" Jan 13 21:27:54.701716 kubelet[2579]: I0113 21:27:54.701587 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xtfs2" podStartSLOduration=4.701559924 podStartE2EDuration="4.701559924s" podCreationTimestamp="2025-01-13 21:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:27:52.388001548 +0000 UTC m=+6.244304595" watchObservedRunningTime="2025-01-13 21:27:54.701559924 +0000 UTC m=+8.557862973" Jan 13 21:27:55.447929 kubelet[2579]: I0113 21:27:55.447834 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-xz2rv" podStartSLOduration=2.519593582 podStartE2EDuration="4.447807999s" podCreationTimestamp="2025-01-13 21:27:51 +0000 UTC" firstStartedPulling="2025-01-13 21:27:52.467100961 +0000 UTC m=+6.323403996" lastFinishedPulling="2025-01-13 21:27:54.395315375 +0000 UTC m=+8.251618413" observedRunningTime="2025-01-13 21:27:55.397684972 +0000 UTC m=+9.253988016" watchObservedRunningTime="2025-01-13 21:27:55.447807999 +0000 UTC m=+9.304111047" Jan 13 21:27:57.681770 systemd[1]: Created slice kubepods-besteffort-pod09c18b38_b09c_420b_a074_05d450623c8b.slice - libcontainer container kubepods-besteffort-pod09c18b38_b09c_420b_a074_05d450623c8b.slice. Jan 13 21:27:57.686780 kubelet[2579]: I0113 21:27:57.686719 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09c18b38-b09c-420b-a074-05d450623c8b-tigera-ca-bundle\") pod \"calico-typha-b7959dd4d-7hwb2\" (UID: \"09c18b38-b09c-420b-a074-05d450623c8b\") " pod="calico-system/calico-typha-b7959dd4d-7hwb2" Jan 13 21:27:57.686780 kubelet[2579]: I0113 21:27:57.686777 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/09c18b38-b09c-420b-a074-05d450623c8b-typha-certs\") pod \"calico-typha-b7959dd4d-7hwb2\" (UID: \"09c18b38-b09c-420b-a074-05d450623c8b\") " pod="calico-system/calico-typha-b7959dd4d-7hwb2" Jan 13 21:27:57.687805 kubelet[2579]: I0113 21:27:57.686807 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jm9\" (UniqueName: \"kubernetes.io/projected/09c18b38-b09c-420b-a074-05d450623c8b-kube-api-access-78jm9\") pod \"calico-typha-b7959dd4d-7hwb2\" (UID: \"09c18b38-b09c-420b-a074-05d450623c8b\") " pod="calico-system/calico-typha-b7959dd4d-7hwb2" Jan 13 21:27:57.824737 systemd[1]: Created slice kubepods-besteffort-pode1a8bbc5_a618_4f66_bc97_75cba6e7a034.slice - libcontainer container kubepods-besteffort-pode1a8bbc5_a618_4f66_bc97_75cba6e7a034.slice. 
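In the tigera-operator entry above, podStartE2EDuration (4.447807999s) and podStartSLOduration (2.519593582s) differ by exactly the image pull window (firstStartedPulling 21:27:52.467100961 to lastFinishedPulling 21:27:54.395315375, about 1.928s): the time spent pulling the image is excluded from the SLO figure. A short Go check of that arithmetic, using the timestamps from the log:

// slo_check.go — verifies from the logged timestamps that
// podStartE2EDuration minus the image pull window matches
// podStartSLOduration (to within a few nanoseconds of rounding).
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	start, _ := time.Parse(layout, "2025-01-13 21:27:52.467100961 +0000 UTC")
	end, _ := time.Parse(layout, "2025-01-13 21:27:54.395315375 +0000 UTC")
	e2e := 4447807999 * time.Nanosecond
	fmt.Println(e2e - end.Sub(start)) // ≈ 2.519593585s vs logged 2.519593582
}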
Jan 13 21:27:57.888721 kubelet[2579]: I0113 21:27:57.888668 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-lib-modules\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.888912 kubelet[2579]: I0113 21:27:57.888735 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-flexvol-driver-host\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.888912 kubelet[2579]: I0113 21:27:57.888780 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-cni-bin-dir\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.888912 kubelet[2579]: I0113 21:27:57.888812 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-tigera-ca-bundle\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.888912 kubelet[2579]: I0113 21:27:57.888837 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-cni-log-dir\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.888912 kubelet[2579]: I0113 21:27:57.888863 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-cni-net-dir\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889251 kubelet[2579]: I0113 21:27:57.888892 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88m8b\" (UniqueName: \"kubernetes.io/projected/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-kube-api-access-88m8b\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889251 kubelet[2579]: I0113 21:27:57.888923 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-var-lib-calico\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889251 kubelet[2579]: I0113 21:27:57.888953 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-policysync\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889251 kubelet[2579]: I0113 21:27:57.888979 2579 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-xtables-lock\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889251 kubelet[2579]: I0113 21:27:57.889008 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-node-certs\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.889791 kubelet[2579]: I0113 21:27:57.889036 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e1a8bbc5-a618-4f66-bc97-75cba6e7a034-var-run-calico\") pod \"calico-node-sw2cs\" (UID: \"e1a8bbc5-a618-4f66-bc97-75cba6e7a034\") " pod="calico-system/calico-node-sw2cs" Jan 13 21:27:57.908185 kubelet[2579]: E0113 21:27:57.908083 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:27:57.990795 kubelet[2579]: I0113 21:27:57.989295 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1eede6f8-94e3-4a63-bb4e-723906a70abc-varrun\") pod \"csi-node-driver-76vfx\" (UID: \"1eede6f8-94e3-4a63-bb4e-723906a70abc\") " pod="calico-system/csi-node-driver-76vfx" Jan 13 21:27:57.990795 kubelet[2579]: I0113 21:27:57.989346 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1eede6f8-94e3-4a63-bb4e-723906a70abc-socket-dir\") pod \"csi-node-driver-76vfx\" (UID: \"1eede6f8-94e3-4a63-bb4e-723906a70abc\") " pod="calico-system/csi-node-driver-76vfx" Jan 13 21:27:57.990795 kubelet[2579]: I0113 21:27:57.989411 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zr9\" (UniqueName: \"kubernetes.io/projected/1eede6f8-94e3-4a63-bb4e-723906a70abc-kube-api-access-96zr9\") pod \"csi-node-driver-76vfx\" (UID: \"1eede6f8-94e3-4a63-bb4e-723906a70abc\") " pod="calico-system/csi-node-driver-76vfx" Jan 13 21:27:57.990795 kubelet[2579]: I0113 21:27:57.989512 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1eede6f8-94e3-4a63-bb4e-723906a70abc-registration-dir\") pod \"csi-node-driver-76vfx\" (UID: \"1eede6f8-94e3-4a63-bb4e-723906a70abc\") " pod="calico-system/csi-node-driver-76vfx" Jan 13 21:27:57.990795 kubelet[2579]: I0113 21:27:57.989556 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1eede6f8-94e3-4a63-bb4e-723906a70abc-kubelet-dir\") pod \"csi-node-driver-76vfx\" (UID: \"1eede6f8-94e3-4a63-bb4e-723906a70abc\") " pod="calico-system/csi-node-driver-76vfx" Jan 13 21:27:57.991178 containerd[1471]: time="2025-01-13T21:27:57.990378371Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-b7959dd4d-7hwb2,Uid:09c18b38-b09c-420b-a074-05d450623c8b,Namespace:calico-system,Attempt:0,}" Jan 13 21:27:57.997453 kubelet[2579]: E0113 21:27:57.997413 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:57.997453 kubelet[2579]: W0113 21:27:57.997450 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:57.997809 kubelet[2579]: E0113 21:27:57.997596 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.000906 kubelet[2579]: E0113 21:27:58.000874 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.000906 kubelet[2579]: W0113 21:27:58.000903 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.001191 kubelet[2579]: E0113 21:27:58.001128 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.002083 kubelet[2579]: E0113 21:27:58.001749 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.002338 kubelet[2579]: W0113 21:27:58.002221 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.002338 kubelet[2579]: E0113 21:27:58.002259 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.004002 kubelet[2579]: E0113 21:27:58.003255 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.004002 kubelet[2579]: W0113 21:27:58.003276 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.004281 kubelet[2579]: E0113 21:27:58.004108 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.010740 kubelet[2579]: E0113 21:27:58.010476 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.010740 kubelet[2579]: W0113 21:27:58.010499 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.010740 kubelet[2579]: E0113 21:27:58.010523 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.016845 kubelet[2579]: E0113 21:27:58.015121 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.016845 kubelet[2579]: W0113 21:27:58.015146 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.016845 kubelet[2579]: E0113 21:27:58.016691 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.019269 kubelet[2579]: E0113 21:27:58.017026 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.025266 kubelet[2579]: W0113 21:27:58.017041 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.025266 kubelet[2579]: E0113 21:27:58.019453 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.025266 kubelet[2579]: E0113 21:27:58.025178 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.025266 kubelet[2579]: W0113 21:27:58.025195 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.054092 kubelet[2579]: E0113 21:27:58.050289 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.054092 kubelet[2579]: W0113 21:27:58.050357 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.054092 kubelet[2579]: E0113 21:27:58.050390 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.054092 kubelet[2579]: E0113 21:27:58.050472 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.090119 kubelet[2579]: E0113 21:27:58.089375 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.090119 kubelet[2579]: W0113 21:27:58.089401 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.090119 kubelet[2579]: E0113 21:27:58.089426 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.092258 kubelet[2579]: E0113 21:27:58.091272 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.092258 kubelet[2579]: W0113 21:27:58.091292 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.092258 kubelet[2579]: E0113 21:27:58.091316 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.093193 kubelet[2579]: E0113 21:27:58.092778 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.093193 kubelet[2579]: W0113 21:27:58.092799 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.093193 kubelet[2579]: E0113 21:27:58.092842 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.094387 kubelet[2579]: E0113 21:27:58.094225 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.094387 kubelet[2579]: W0113 21:27:58.094244 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.094387 kubelet[2579]: E0113 21:27:58.094283 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.096484 kubelet[2579]: E0113 21:27:58.096168 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.096484 kubelet[2579]: W0113 21:27:58.096189 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.096484 kubelet[2579]: E0113 21:27:58.096224 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.096896 kubelet[2579]: E0113 21:27:58.096751 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.096896 kubelet[2579]: W0113 21:27:58.096767 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.096896 kubelet[2579]: E0113 21:27:58.096861 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.097504 kubelet[2579]: E0113 21:27:58.097325 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.097504 kubelet[2579]: W0113 21:27:58.097340 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.097504 kubelet[2579]: E0113 21:27:58.097449 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.099486 kubelet[2579]: E0113 21:27:58.098252 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.099486 kubelet[2579]: W0113 21:27:58.098270 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.099486 kubelet[2579]: E0113 21:27:58.099368 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.100408 kubelet[2579]: E0113 21:27:58.100103 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.100408 kubelet[2579]: W0113 21:27:58.100122 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.100408 kubelet[2579]: E0113 21:27:58.100321 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.102029 kubelet[2579]: E0113 21:27:58.101482 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.102029 kubelet[2579]: W0113 21:27:58.101500 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.102029 kubelet[2579]: E0113 21:27:58.101710 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.102833 kubelet[2579]: E0113 21:27:58.102669 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.102833 kubelet[2579]: W0113 21:27:58.102686 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.102833 kubelet[2579]: E0113 21:27:58.102794 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.104273 kubelet[2579]: E0113 21:27:58.103602 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.104273 kubelet[2579]: W0113 21:27:58.103618 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.104273 kubelet[2579]: E0113 21:27:58.104092 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.105265 kubelet[2579]: E0113 21:27:58.105115 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.105265 kubelet[2579]: W0113 21:27:58.105136 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.106083 kubelet[2579]: E0113 21:27:58.105431 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.106441 kubelet[2579]: E0113 21:27:58.106296 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.106441 kubelet[2579]: W0113 21:27:58.106323 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.107197 kubelet[2579]: E0113 21:27:58.107003 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.108075 kubelet[2579]: E0113 21:27:58.107957 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.108075 kubelet[2579]: W0113 21:27:58.107977 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.108503 kubelet[2579]: E0113 21:27:58.108313 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.108904 kubelet[2579]: E0113 21:27:58.108703 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.108904 kubelet[2579]: W0113 21:27:58.108719 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.109744 kubelet[2579]: E0113 21:27:58.109462 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.113411 kubelet[2579]: E0113 21:27:58.113189 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.113411 kubelet[2579]: W0113 21:27:58.113211 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.113411 kubelet[2579]: E0113 21:27:58.113369 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.113619 kubelet[2579]: E0113 21:27:58.113558 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.113619 kubelet[2579]: W0113 21:27:58.113572 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.113720 kubelet[2579]: E0113 21:27:58.113705 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.115765 containerd[1471]: time="2025-01-13T21:27:58.111254051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:58.115765 containerd[1471]: time="2025-01-13T21:27:58.111341637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:58.115765 containerd[1471]: time="2025-01-13T21:27:58.111369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:58.115765 containerd[1471]: time="2025-01-13T21:27:58.111500851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:58.116025 kubelet[2579]: E0113 21:27:58.115397 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.116025 kubelet[2579]: W0113 21:27:58.115420 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.116025 kubelet[2579]: E0113 21:27:58.115525 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.116025 kubelet[2579]: E0113 21:27:58.115921 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.116025 kubelet[2579]: W0113 21:27:58.115936 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.116571 kubelet[2579]: E0113 21:27:58.116071 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.116571 kubelet[2579]: E0113 21:27:58.116313 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.116571 kubelet[2579]: W0113 21:27:58.116326 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.116571 kubelet[2579]: E0113 21:27:58.116460 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.116765 kubelet[2579]: E0113 21:27:58.116651 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.116765 kubelet[2579]: W0113 21:27:58.116662 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.116924 kubelet[2579]: E0113 21:27:58.116804 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.118462 kubelet[2579]: E0113 21:27:58.117029 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.118462 kubelet[2579]: W0113 21:27:58.117073 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.119157 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.119705 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.119898 kubelet[2579]: W0113 21:27:58.119719 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.119741 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.120132 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.119898 kubelet[2579]: W0113 21:27:58.120148 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.120289 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.120496 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.119898 kubelet[2579]: W0113 21:27:58.120509 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.119898 kubelet[2579]: E0113 21:27:58.120523 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.133079 containerd[1471]: time="2025-01-13T21:27:58.131417866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw2cs,Uid:e1a8bbc5-a618-4f66-bc97-75cba6e7a034,Namespace:calico-system,Attempt:0,}" Jan 13 21:27:58.157781 kubelet[2579]: E0113 21:27:58.157717 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:58.157781 kubelet[2579]: W0113 21:27:58.157778 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:58.157994 kubelet[2579]: E0113 21:27:58.157809 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:27:58.187492 systemd[1]: Started cri-containerd-544900cdce06a768d6849685b51ced5804013478a66aa592cd4efa962567aadb.scope - libcontainer container 544900cdce06a768d6849685b51ced5804013478a66aa592cd4efa962567aadb. Jan 13 21:27:58.222589 containerd[1471]: time="2025-01-13T21:27:58.222292637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:58.222589 containerd[1471]: time="2025-01-13T21:27:58.222509159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:58.223025 containerd[1471]: time="2025-01-13T21:27:58.222672020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:58.226820 containerd[1471]: time="2025-01-13T21:27:58.226589103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:58.267298 systemd[1]: Started cri-containerd-8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a.scope - libcontainer container 8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a. 
Jan 13 21:27:58.341615 containerd[1471]: time="2025-01-13T21:27:58.341532364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw2cs,Uid:e1a8bbc5-a618-4f66-bc97-75cba6e7a034,Namespace:calico-system,Attempt:0,} returns sandbox id \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\"" Jan 13 21:27:58.345273 containerd[1471]: time="2025-01-13T21:27:58.345041270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:27:58.402195 containerd[1471]: time="2025-01-13T21:27:58.401004548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b7959dd4d-7hwb2,Uid:09c18b38-b09c-420b-a074-05d450623c8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"544900cdce06a768d6849685b51ced5804013478a66aa592cd4efa962567aadb\""
Jan 13 21:27:59.184080 kubelet[2579]: E0113 21:27:59.182595 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:27:59.184080 kubelet[2579]: W0113 21:27:59.182622 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:27:59.184080 kubelet[2579]: E0113 21:27:59.182836 2579 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe-failure triplet repeats with only timestamps changing from 21:27:59.184 through 21:27:59.202]
Jan 13 21:27:59.315564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578140252.mount: Deactivated successfully. Jan 13 21:27:59.471780 containerd[1471]: time="2025-01-13T21:27:59.471635726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:59.473957 containerd[1471]: time="2025-01-13T21:27:59.473891297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:27:59.476617 containerd[1471]: time="2025-01-13T21:27:59.474968426Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:59.478741 containerd[1471]: time="2025-01-13T21:27:59.478685803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:59.479816 containerd[1471]: time="2025-01-13T21:27:59.479637920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.134521168s" Jan 13 21:27:59.479816 containerd[1471]: time="2025-01-13T21:27:59.479684315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:27:59.482598 containerd[1471]: time="2025-01-13T21:27:59.481650404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:27:59.483938 containerd[1471]: time="2025-01-13T21:27:59.483888761Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:27:59.510388 containerd[1471]: time="2025-01-13T21:27:59.507667729Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183\"" Jan 13 21:27:59.514544 containerd[1471]: time="2025-01-13T21:27:59.514481326Z" level=info msg="StartContainer for \"112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183\"" Jan 13 21:27:59.561279 systemd[1]: Started 
cri-containerd-112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183.scope - libcontainer container 112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183. Jan 13 21:27:59.601983 containerd[1471]: time="2025-01-13T21:27:59.601883996Z" level=info msg="StartContainer for \"112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183\" returns successfully" Jan 13 21:27:59.624899 systemd[1]: cri-containerd-112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183.scope: Deactivated successfully. Jan 13 21:27:59.806785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183-rootfs.mount: Deactivated successfully. Jan 13 21:27:59.981075 containerd[1471]: time="2025-01-13T21:27:59.980685337Z" level=info msg="shim disconnected" id=112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183 namespace=k8s.io Jan 13 21:27:59.981075 containerd[1471]: time="2025-01-13T21:27:59.980768142Z" level=warning msg="cleaning up after shim disconnected" id=112224442b49d3a35fcd2c462fc747274dd2404a648d2aac96fff05857966183 namespace=k8s.io Jan 13 21:27:59.981075 containerd[1471]: time="2025-01-13T21:27:59.980783507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:28:00.306794 kubelet[2579]: E0113 21:28:00.305841 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:01.428414 containerd[1471]: time="2025-01-13T21:28:01.428356550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:01.429637 containerd[1471]: time="2025-01-13T21:28:01.429515619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:28:01.430948 containerd[1471]: time="2025-01-13T21:28:01.430869081Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:01.433830 containerd[1471]: time="2025-01-13T21:28:01.433763816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:01.434952 containerd[1471]: time="2025-01-13T21:28:01.434743134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.953030115s" Jan 13 21:28:01.434952 containerd[1471]: time="2025-01-13T21:28:01.434789837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:28:01.436564 containerd[1471]: time="2025-01-13T21:28:01.436533206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:28:01.459614 containerd[1471]: time="2025-01-13T21:28:01.459174928Z" level=info msg="CreateContainer 
within sandbox \"544900cdce06a768d6849685b51ced5804013478a66aa592cd4efa962567aadb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:28:01.477945 containerd[1471]: time="2025-01-13T21:28:01.477888941Z" level=info msg="CreateContainer within sandbox \"544900cdce06a768d6849685b51ced5804013478a66aa592cd4efa962567aadb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bbe862538a617a4ed2014e6295529f6c392942e2cb86831c6dd763004a5e384\"" Jan 13 21:28:01.479081 containerd[1471]: time="2025-01-13T21:28:01.478605437Z" level=info msg="StartContainer for \"6bbe862538a617a4ed2014e6295529f6c392942e2cb86831c6dd763004a5e384\"" Jan 13 21:28:01.529342 systemd[1]: Started cri-containerd-6bbe862538a617a4ed2014e6295529f6c392942e2cb86831c6dd763004a5e384.scope - libcontainer container 6bbe862538a617a4ed2014e6295529f6c392942e2cb86831c6dd763004a5e384. Jan 13 21:28:01.590647 containerd[1471]: time="2025-01-13T21:28:01.590584785Z" level=info msg="StartContainer for \"6bbe862538a617a4ed2014e6295529f6c392942e2cb86831c6dd763004a5e384\" returns successfully" Jan 13 21:28:02.306560 kubelet[2579]: E0113 21:28:02.306105 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:02.425486 kubelet[2579]: I0113 21:28:02.424255 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b7959dd4d-7hwb2" podStartSLOduration=2.390883531 podStartE2EDuration="5.424108244s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:27:58.402949143 +0000 UTC m=+12.259252182" lastFinishedPulling="2025-01-13 21:28:01.436173847 +0000 UTC m=+15.292476895" observedRunningTime="2025-01-13 21:28:02.420748521 +0000 UTC m=+16.277051574" watchObservedRunningTime="2025-01-13 21:28:02.424108244 +0000 UTC m=+16.280411292" Jan 13 21:28:03.407623 kubelet[2579]: I0113 21:28:03.407583 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:28:04.305609 kubelet[2579]: E0113 21:28:04.305556 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:05.365648 containerd[1471]: time="2025-01-13T21:28:05.365587654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:05.366830 containerd[1471]: time="2025-01-13T21:28:05.366769119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:28:05.368239 containerd[1471]: time="2025-01-13T21:28:05.368196674Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:05.372119 containerd[1471]: time="2025-01-13T21:28:05.371991041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 21:28:05.373475 containerd[1471]: time="2025-01-13T21:28:05.373266568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.936689466s" Jan 13 21:28:05.373475 containerd[1471]: time="2025-01-13T21:28:05.373313793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:28:05.377093 containerd[1471]: time="2025-01-13T21:28:05.376918154Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:28:05.398372 containerd[1471]: time="2025-01-13T21:28:05.398311741Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e\"" Jan 13 21:28:05.399171 containerd[1471]: time="2025-01-13T21:28:05.399091019Z" level=info msg="StartContainer for \"cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e\"" Jan 13 21:28:05.456274 systemd[1]: Started cri-containerd-cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e.scope - libcontainer container cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e. Jan 13 21:28:05.497004 containerd[1471]: time="2025-01-13T21:28:05.496521761Z" level=info msg="StartContainer for \"cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e\" returns successfully" Jan 13 21:28:06.307118 kubelet[2579]: E0113 21:28:06.305562 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:06.329912 containerd[1471]: time="2025-01-13T21:28:06.329856083Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:28:06.332458 systemd[1]: cri-containerd-cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e.scope: Deactivated successfully. Jan 13 21:28:06.363796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e-rootfs.mount: Deactivated successfully. Jan 13 21:28:06.440510 kubelet[2579]: I0113 21:28:06.440473 2579 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:28:06.490551 systemd[1]: Created slice kubepods-burstable-podb1bd5e1f_f801_477d_8fd5_44146cbed3de.slice - libcontainer container kubepods-burstable-podb1bd5e1f_f801_477d_8fd5_44146cbed3de.slice. Jan 13 21:28:06.517502 systemd[1]: Created slice kubepods-burstable-pod169d99a2_83db_446f_8f2b_e2938f3cb74a.slice - libcontainer container kubepods-burstable-pod169d99a2_83db_446f_8f2b_e2938f3cb74a.slice. 
Jan 13 21:28:06.532644 systemd[1]: Created slice kubepods-besteffort-podbd143408_05e7_4dc4_9e36_d11bd741a281.slice - libcontainer container kubepods-besteffort-podbd143408_05e7_4dc4_9e36_d11bd741a281.slice. Jan 13 21:28:06.544379 systemd[1]: Created slice kubepods-besteffort-pod4e0e803f_5d08_4bec_b2f7_1b57af2ab9b4.slice - libcontainer container kubepods-besteffort-pod4e0e803f_5d08_4bec_b2f7_1b57af2ab9b4.slice. Jan 13 21:28:06.555747 systemd[1]: Created slice kubepods-besteffort-podb0798d6b_8ec9_490c_a862_c6be718179f8.slice - libcontainer container kubepods-besteffort-podb0798d6b_8ec9_490c_a862_c6be718179f8.slice. Jan 13 21:28:06.573262 kubelet[2579]: I0113 21:28:06.573213 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1bd5e1f-f801-477d-8fd5-44146cbed3de-config-volume\") pod \"coredns-6f6b679f8f-d4lrn\" (UID: \"b1bd5e1f-f801-477d-8fd5-44146cbed3de\") " pod="kube-system/coredns-6f6b679f8f-d4lrn" Jan 13 21:28:06.573262 kubelet[2579]: I0113 21:28:06.573259 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd143408-05e7-4dc4-9e36-d11bd741a281-calico-apiserver-certs\") pod \"calico-apiserver-6d4c964784-mgmhs\" (UID: \"bd143408-05e7-4dc4-9e36-d11bd741a281\") " pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" Jan 13 21:28:06.573541 kubelet[2579]: I0113 21:28:06.573292 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0798d6b-8ec9-490c-a862-c6be718179f8-tigera-ca-bundle\") pod \"calico-kube-controllers-7b795dcbb4-fq58p\" (UID: \"b0798d6b-8ec9-490c-a862-c6be718179f8\") " pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" Jan 13 21:28:06.573541 kubelet[2579]: I0113 21:28:06.573321 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjlt\" (UniqueName: \"kubernetes.io/projected/169d99a2-83db-446f-8f2b-e2938f3cb74a-kube-api-access-8zjlt\") pod \"coredns-6f6b679f8f-vz9hf\" (UID: \"169d99a2-83db-446f-8f2b-e2938f3cb74a\") " pod="kube-system/coredns-6f6b679f8f-vz9hf" Jan 13 21:28:06.573541 kubelet[2579]: I0113 21:28:06.573348 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcmp\" (UniqueName: \"kubernetes.io/projected/4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4-kube-api-access-xkcmp\") pod \"calico-apiserver-6d4c964784-84gdc\" (UID: \"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4\") " pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" Jan 13 21:28:06.573541 kubelet[2579]: I0113 21:28:06.573384 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpvk\" (UniqueName: \"kubernetes.io/projected/b1bd5e1f-f801-477d-8fd5-44146cbed3de-kube-api-access-nbpvk\") pod \"coredns-6f6b679f8f-d4lrn\" (UID: \"b1bd5e1f-f801-477d-8fd5-44146cbed3de\") " pod="kube-system/coredns-6f6b679f8f-d4lrn" Jan 13 21:28:06.573541 kubelet[2579]: I0113 21:28:06.573414 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4-calico-apiserver-certs\") pod \"calico-apiserver-6d4c964784-84gdc\" (UID: \"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4\") " 
pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" Jan 13 21:28:06.573840 kubelet[2579]: I0113 21:28:06.573446 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962db\" (UniqueName: \"kubernetes.io/projected/b0798d6b-8ec9-490c-a862-c6be718179f8-kube-api-access-962db\") pod \"calico-kube-controllers-7b795dcbb4-fq58p\" (UID: \"b0798d6b-8ec9-490c-a862-c6be718179f8\") " pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" Jan 13 21:28:06.573840 kubelet[2579]: I0113 21:28:06.573481 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fw44\" (UniqueName: \"kubernetes.io/projected/bd143408-05e7-4dc4-9e36-d11bd741a281-kube-api-access-7fw44\") pod \"calico-apiserver-6d4c964784-mgmhs\" (UID: \"bd143408-05e7-4dc4-9e36-d11bd741a281\") " pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" Jan 13 21:28:06.573840 kubelet[2579]: I0113 21:28:06.573512 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/169d99a2-83db-446f-8f2b-e2938f3cb74a-config-volume\") pod \"coredns-6f6b679f8f-vz9hf\" (UID: \"169d99a2-83db-446f-8f2b-e2938f3cb74a\") " pod="kube-system/coredns-6f6b679f8f-vz9hf" Jan 13 21:28:06.862597 containerd[1471]: time="2025-01-13T21:28:06.862120688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-mgmhs,Uid:bd143408-05e7-4dc4-9e36-d11bd741a281,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:28:06.863860 containerd[1471]: time="2025-01-13T21:28:06.863701769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-84gdc,Uid:4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:28:06.865098 containerd[1471]: time="2025-01-13T21:28:06.865023425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d4lrn,Uid:b1bd5e1f-f801-477d-8fd5-44146cbed3de,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:06.865541 containerd[1471]: time="2025-01-13T21:28:06.865467947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vz9hf,Uid:169d99a2-83db-446f-8f2b-e2938f3cb74a,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:06.867957 containerd[1471]: time="2025-01-13T21:28:06.867894431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b795dcbb4-fq58p,Uid:b0798d6b-8ec9-490c-a862-c6be718179f8,Namespace:calico-system,Attempt:0,}" Jan 13 21:28:07.401733 containerd[1471]: time="2025-01-13T21:28:07.401657396Z" level=info msg="shim disconnected" id=cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e namespace=k8s.io Jan 13 21:28:07.401733 containerd[1471]: time="2025-01-13T21:28:07.401796460Z" level=warning msg="cleaning up after shim disconnected" id=cae544d7f35460e6d23ae51a92b82e3c008fdabf84964a3f8e326334edb58f2e namespace=k8s.io Jan 13 21:28:07.402303 containerd[1471]: time="2025-01-13T21:28:07.401818734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:28:07.683586 containerd[1471]: time="2025-01-13T21:28:07.683436275Z" level=error msg="Failed to destroy network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.684945 
containerd[1471]: time="2025-01-13T21:28:07.684242931Z" level=error msg="encountered an error cleaning up failed sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.684945 containerd[1471]: time="2025-01-13T21:28:07.684847251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-84gdc,Uid:4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.686436 kubelet[2579]: E0113 21:28:07.685210 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.686436 kubelet[2579]: E0113 21:28:07.685314 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" Jan 13 21:28:07.686436 kubelet[2579]: E0113 21:28:07.685348 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" Jan 13 21:28:07.687015 kubelet[2579]: E0113 21:28:07.685421 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4c964784-84gdc_calico-apiserver(4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4c964784-84gdc_calico-apiserver(4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" podUID="4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4" Jan 13 21:28:07.702065 containerd[1471]: time="2025-01-13T21:28:07.701992145Z" level=error msg="Failed to destroy network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.703128 containerd[1471]: time="2025-01-13T21:28:07.702574417Z" level=error msg="encountered an error cleaning up failed sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.703460 containerd[1471]: time="2025-01-13T21:28:07.703308761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vz9hf,Uid:169d99a2-83db-446f-8f2b-e2938f3cb74a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.704031 kubelet[2579]: E0113 21:28:07.703941 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.704163 kubelet[2579]: E0113 21:28:07.704094 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vz9hf" Jan 13 21:28:07.704163 kubelet[2579]: E0113 21:28:07.704126 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vz9hf" Jan 13 21:28:07.704301 kubelet[2579]: E0113 21:28:07.704187 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vz9hf_kube-system(169d99a2-83db-446f-8f2b-e2938f3cb74a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vz9hf_kube-system(169d99a2-83db-446f-8f2b-e2938f3cb74a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vz9hf" podUID="169d99a2-83db-446f-8f2b-e2938f3cb74a" Jan 13 21:28:07.718217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956-shm.mount: Deactivated successfully. 
Jan 13 21:28:07.728257 containerd[1471]: time="2025-01-13T21:28:07.728200896Z" level=error msg="Failed to destroy network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.728570 containerd[1471]: time="2025-01-13T21:28:07.728519149Z" level=error msg="Failed to destroy network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.729327 containerd[1471]: time="2025-01-13T21:28:07.728837100Z" level=error msg="encountered an error cleaning up failed sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.729327 containerd[1471]: time="2025-01-13T21:28:07.729142423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-mgmhs,Uid:bd143408-05e7-4dc4-9e36-d11bd741a281,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.730171 kubelet[2579]: E0113 21:28:07.730119 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.730519 kubelet[2579]: E0113 21:28:07.730209 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" Jan 13 21:28:07.730519 kubelet[2579]: E0113 21:28:07.730242 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" Jan 13 21:28:07.732469 kubelet[2579]: E0113 21:28:07.730370 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4c964784-mgmhs_calico-apiserver(bd143408-05e7-4dc4-9e36-d11bd741a281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d4c964784-mgmhs_calico-apiserver(bd143408-05e7-4dc4-9e36-d11bd741a281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" podUID="bd143408-05e7-4dc4-9e36-d11bd741a281" Jan 13 21:28:07.732598 containerd[1471]: time="2025-01-13T21:28:07.730286183Z" level=error msg="encountered an error cleaning up failed sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.734159 containerd[1471]: time="2025-01-13T21:28:07.730640711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d4lrn,Uid:b1bd5e1f-f801-477d-8fd5-44146cbed3de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.734748 kubelet[2579]: E0113 21:28:07.734684 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.734859 kubelet[2579]: E0113 21:28:07.734751 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d4lrn" Jan 13 21:28:07.734859 kubelet[2579]: E0113 21:28:07.734785 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d4lrn" Jan 13 21:28:07.734859 kubelet[2579]: E0113 21:28:07.734835 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-d4lrn_kube-system(b1bd5e1f-f801-477d-8fd5-44146cbed3de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-d4lrn_kube-system(b1bd5e1f-f801-477d-8fd5-44146cbed3de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d4lrn" podUID="b1bd5e1f-f801-477d-8fd5-44146cbed3de" Jan 13 21:28:07.735187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32-shm.mount: Deactivated successfully. Jan 13 21:28:07.735363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02-shm.mount: Deactivated successfully. Jan 13 21:28:07.745303 containerd[1471]: time="2025-01-13T21:28:07.745241391Z" level=error msg="Failed to destroy network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.748750 containerd[1471]: time="2025-01-13T21:28:07.748681915Z" level=error msg="encountered an error cleaning up failed sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.748858 containerd[1471]: time="2025-01-13T21:28:07.748780278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b795dcbb4-fq58p,Uid:b0798d6b-8ec9-490c-a862-c6be718179f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.749283 kubelet[2579]: E0113 21:28:07.749229 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:07.749408 kubelet[2579]: E0113 21:28:07.749320 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" Jan 13 21:28:07.749408 kubelet[2579]: E0113 21:28:07.749352 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" Jan 13 21:28:07.749618 kubelet[2579]: E0113 21:28:07.749411 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-7b795dcbb4-fq58p_calico-system(b0798d6b-8ec9-490c-a862-c6be718179f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b795dcbb4-fq58p_calico-system(b0798d6b-8ec9-490c-a862-c6be718179f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" podUID="b0798d6b-8ec9-490c-a862-c6be718179f8" Jan 13 21:28:07.750491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101-shm.mount: Deactivated successfully. Jan 13 21:28:08.314252 systemd[1]: Created slice kubepods-besteffort-pod1eede6f8_94e3_4a63_bb4e_723906a70abc.slice - libcontainer container kubepods-besteffort-pod1eede6f8_94e3_4a63_bb4e_723906a70abc.slice. Jan 13 21:28:08.318550 containerd[1471]: time="2025-01-13T21:28:08.318329299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76vfx,Uid:1eede6f8-94e3-4a63-bb4e-723906a70abc,Namespace:calico-system,Attempt:0,}" Jan 13 21:28:08.396388 containerd[1471]: time="2025-01-13T21:28:08.396322626Z" level=error msg="Failed to destroy network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.396802 containerd[1471]: time="2025-01-13T21:28:08.396757912Z" level=error msg="encountered an error cleaning up failed sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.396916 containerd[1471]: time="2025-01-13T21:28:08.396844878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76vfx,Uid:1eede6f8-94e3-4a63-bb4e-723906a70abc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.397230 kubelet[2579]: E0113 21:28:08.397162 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.397341 kubelet[2579]: E0113 21:28:08.397244 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76vfx" Jan 13 21:28:08.397341 kubelet[2579]: E0113 21:28:08.397276 2579 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76vfx" Jan 13 21:28:08.397452 kubelet[2579]: E0113 21:28:08.397341 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76vfx_calico-system(1eede6f8-94e3-4a63-bb4e-723906a70abc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76vfx_calico-system(1eede6f8-94e3-4a63-bb4e-723906a70abc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:08.425594 kubelet[2579]: I0113 21:28:08.425509 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:08.429128 containerd[1471]: time="2025-01-13T21:28:08.428675992Z" level=info msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\"" Jan 13 21:28:08.429128 containerd[1471]: time="2025-01-13T21:28:08.428924324Z" level=info msg="Ensure that sandbox 6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d in task-service has been cleanup successfully" Jan 13 21:28:08.439918 containerd[1471]: time="2025-01-13T21:28:08.439480116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:28:08.441977 kubelet[2579]: I0113 21:28:08.441443 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:08.443181 containerd[1471]: time="2025-01-13T21:28:08.443134662Z" level=info msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\"" Jan 13 21:28:08.444476 containerd[1471]: time="2025-01-13T21:28:08.444436221Z" level=info msg="Ensure that sandbox 8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9 in task-service has been cleanup successfully" Jan 13 21:28:08.449414 kubelet[2579]: I0113 21:28:08.449352 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:08.453977 containerd[1471]: time="2025-01-13T21:28:08.453472492Z" level=info msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\"" Jan 13 21:28:08.453977 containerd[1471]: time="2025-01-13T21:28:08.453684899Z" level=info msg="Ensure that sandbox aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956 in task-service has been cleanup successfully" Jan 13 21:28:08.462892 kubelet[2579]: I0113 21:28:08.462863 2579 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:08.466905 containerd[1471]: time="2025-01-13T21:28:08.466860509Z" level=info msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\"" Jan 13 21:28:08.467172 containerd[1471]: time="2025-01-13T21:28:08.467138471Z" level=info msg="Ensure that sandbox acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02 in task-service has been cleanup successfully" Jan 13 21:28:08.469804 kubelet[2579]: I0113 21:28:08.469204 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:08.470230 containerd[1471]: time="2025-01-13T21:28:08.470187459Z" level=info msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\"" Jan 13 21:28:08.471018 containerd[1471]: time="2025-01-13T21:28:08.470984006Z" level=info msg="Ensure that sandbox ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101 in task-service has been cleanup successfully" Jan 13 21:28:08.476663 kubelet[2579]: I0113 21:28:08.476633 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:08.483265 containerd[1471]: time="2025-01-13T21:28:08.482652004Z" level=info msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\"" Jan 13 21:28:08.488894 containerd[1471]: time="2025-01-13T21:28:08.488826795Z" level=info msg="Ensure that sandbox 9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32 in task-service has been cleanup successfully" Jan 13 21:28:08.598377 containerd[1471]: time="2025-01-13T21:28:08.598228714Z" level=error msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" failed" error="failed to destroy network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.600419 kubelet[2579]: E0113 21:28:08.600372 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:08.600786 kubelet[2579]: E0113 21:28:08.600708 2579 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"} Jan 13 21:28:08.602071 kubelet[2579]: E0113 21:28:08.601023 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1eede6f8-94e3-4a63-bb4e-723906a70abc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 
21:28:08.602365 kubelet[2579]: E0113 21:28:08.602292 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1eede6f8-94e3-4a63-bb4e-723906a70abc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76vfx" podUID="1eede6f8-94e3-4a63-bb4e-723906a70abc" Jan 13 21:28:08.607627 containerd[1471]: time="2025-01-13T21:28:08.607180204Z" level=error msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" failed" error="failed to destroy network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.607767 kubelet[2579]: E0113 21:28:08.607420 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:08.607767 kubelet[2579]: E0113 21:28:08.607464 2579 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"} Jan 13 21:28:08.607767 kubelet[2579]: E0113 21:28:08.607507 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:28:08.607767 kubelet[2579]: E0113 21:28:08.607548 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" podUID="4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4" Jan 13 21:28:08.629701 containerd[1471]: time="2025-01-13T21:28:08.629563351Z" level=error msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" failed" error="failed to destroy network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 13 21:28:08.629834 kubelet[2579]: E0113 21:28:08.629801 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:08.629907 kubelet[2579]: E0113 21:28:08.629851 2579 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"} Jan 13 21:28:08.629969 kubelet[2579]: E0113 21:28:08.629909 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1bd5e1f-f801-477d-8fd5-44146cbed3de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:28:08.629969 kubelet[2579]: E0113 21:28:08.629946 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1bd5e1f-f801-477d-8fd5-44146cbed3de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d4lrn" podUID="b1bd5e1f-f801-477d-8fd5-44146cbed3de" Jan 13 21:28:08.631456 containerd[1471]: time="2025-01-13T21:28:08.631404134Z" level=error msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" failed" error="failed to destroy network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.631696 containerd[1471]: time="2025-01-13T21:28:08.631474037Z" level=error msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" failed" error="failed to destroy network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.632035 kubelet[2579]: E0113 21:28:08.631995 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:08.632169 kubelet[2579]: E0113 21:28:08.632072 2579 
kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"} Jan 13 21:28:08.632169 kubelet[2579]: E0113 21:28:08.632116 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0798d6b-8ec9-490c-a862-c6be718179f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:28:08.632169 kubelet[2579]: E0113 21:28:08.632148 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0798d6b-8ec9-490c-a862-c6be718179f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" podUID="b0798d6b-8ec9-490c-a862-c6be718179f8" Jan 13 21:28:08.632467 kubelet[2579]: E0113 21:28:08.632238 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:08.632467 kubelet[2579]: E0113 21:28:08.632271 2579 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"} Jan 13 21:28:08.632467 kubelet[2579]: E0113 21:28:08.632330 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"169d99a2-83db-446f-8f2b-e2938f3cb74a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:28:08.632467 kubelet[2579]: E0113 21:28:08.632360 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"169d99a2-83db-446f-8f2b-e2938f3cb74a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vz9hf" podUID="169d99a2-83db-446f-8f2b-e2938f3cb74a" Jan 13 21:28:08.633210 containerd[1471]: time="2025-01-13T21:28:08.633169152Z" level=error msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" failed" error="failed to 
destroy network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:08.633452 kubelet[2579]: E0113 21:28:08.633389 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:08.633452 kubelet[2579]: E0113 21:28:08.633434 2579 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"} Jan 13 21:28:08.633645 kubelet[2579]: E0113 21:28:08.633473 2579 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd143408-05e7-4dc4-9e36-d11bd741a281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:28:08.633645 kubelet[2579]: E0113 21:28:08.633503 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd143408-05e7-4dc4-9e36-d11bd741a281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" podUID="bd143408-05e7-4dc4-9e36-d11bd741a281" Jan 13 21:28:09.686804 kubelet[2579]: I0113 21:28:09.686759 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:28:15.003735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078192272.mount: Deactivated successfully. 
Jan 13 21:28:15.046350 containerd[1471]: time="2025-01-13T21:28:15.046280791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:15.047647 containerd[1471]: time="2025-01-13T21:28:15.047592511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:28:15.048874 containerd[1471]: time="2025-01-13T21:28:15.048759070Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:15.052018 containerd[1471]: time="2025-01-13T21:28:15.051936032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:15.053147 containerd[1471]: time="2025-01-13T21:28:15.052920342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.613071938s" Jan 13 21:28:15.053147 containerd[1471]: time="2025-01-13T21:28:15.052968795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:28:15.075253 containerd[1471]: time="2025-01-13T21:28:15.075204083Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:28:15.102533 containerd[1471]: time="2025-01-13T21:28:15.102472546Z" level=info msg="CreateContainer within sandbox \"8614fae93dfca7a5ff692d9fa9f6c1b1eededa723742b2b89611601f2376ca0a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e\"" Jan 13 21:28:15.103256 containerd[1471]: time="2025-01-13T21:28:15.103212881Z" level=info msg="StartContainer for \"8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e\"" Jan 13 21:28:15.149337 systemd[1]: Started cri-containerd-8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e.scope - libcontainer container 8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e. Jan 13 21:28:15.189576 containerd[1471]: time="2025-01-13T21:28:15.189394733Z" level=info msg="StartContainer for \"8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e\" returns successfully" Jan 13 21:28:15.290480 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:28:15.290671 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 13 21:28:17.102288 kernel: bpftool[3824]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:28:17.371566 systemd-networkd[1379]: vxlan.calico: Link UP Jan 13 21:28:17.371580 systemd-networkd[1379]: vxlan.calico: Gained carrier Jan 13 21:28:18.748249 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jan 13 21:28:19.306374 containerd[1471]: time="2025-01-13T21:28:19.305893115Z" level=info msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\"" Jan 13 21:28:19.365615 kubelet[2579]: I0113 21:28:19.364939 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sw2cs" podStartSLOduration=5.654791843 podStartE2EDuration="22.364911607s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:27:58.344223688 +0000 UTC m=+12.200526722" lastFinishedPulling="2025-01-13 21:28:15.054343435 +0000 UTC m=+28.910646486" observedRunningTime="2025-01-13 21:28:15.52772014 +0000 UTC m=+29.384023189" watchObservedRunningTime="2025-01-13 21:28:19.364911607 +0000 UTC m=+33.221214657" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.364 [INFO][3910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.364 [INFO][3910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" iface="eth0" netns="/var/run/netns/cni-801f89e2-a428-eb22-2028-a73878871c7d" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.365 [INFO][3910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" iface="eth0" netns="/var/run/netns/cni-801f89e2-a428-eb22-2028-a73878871c7d" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.366 [INFO][3910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" iface="eth0" netns="/var/run/netns/cni-801f89e2-a428-eb22-2028-a73878871c7d" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.366 [INFO][3910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.366 [INFO][3910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.393 [INFO][3916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.393 [INFO][3916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.393 [INFO][3916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.402 [WARNING][3916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.403 [INFO][3916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.405 [INFO][3916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:19.410005 containerd[1471]: 2025-01-13 21:28:19.408 [INFO][3910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Jan 13 21:28:19.411489 containerd[1471]: time="2025-01-13T21:28:19.410184736Z" level=info msg="TearDown network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" successfully" Jan 13 21:28:19.411489 containerd[1471]: time="2025-01-13T21:28:19.410220874Z" level=info msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" returns successfully" Jan 13 21:28:19.412505 containerd[1471]: time="2025-01-13T21:28:19.412441047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vz9hf,Uid:169d99a2-83db-446f-8f2b-e2938f3cb74a,Namespace:kube-system,Attempt:1,}" Jan 13 21:28:19.417537 systemd[1]: run-netns-cni\x2d801f89e2\x2da428\x2deb22\x2d2028\x2da73878871c7d.mount: Deactivated successfully. 
Jan 13 21:28:19.574094 systemd-networkd[1379]: cali787494e4002: Link UP Jan 13 21:28:19.577455 systemd-networkd[1379]: cali787494e4002: Gained carrier Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.483 [INFO][3922] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0 coredns-6f6b679f8f- kube-system 169d99a2-83db-446f-8f2b-e2938f3cb74a 726 0 2025-01-13 21:27:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal coredns-6f6b679f8f-vz9hf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali787494e4002 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.483 [INFO][3922] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.521 [INFO][3933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" HandleID="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.534 [INFO][3933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" HandleID="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-vz9hf", "timestamp":"2025-01-13 21:28:19.521098371 +0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.534 [INFO][3933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.534 [INFO][3933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.534 [INFO][3933] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.536 [INFO][3933] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.541 [INFO][3933] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.546 [INFO][3933] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.548 [INFO][3933] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.551 [INFO][3933] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.551 [INFO][3933] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.553 [INFO][3933] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7 Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.559 [INFO][3933] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.566 [INFO][3933] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.193/26] block=192.168.32.192/26 handle="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.566 [INFO][3933] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.193/26] handle="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.566 [INFO][3933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:28:19.601911 containerd[1471]: 2025-01-13 21:28:19.566 [INFO][3933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.193/26] IPv6=[] ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" HandleID="k8s-pod-network.82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.568 [INFO][3922] cni-plugin/k8s.go 386: Populated endpoint ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"169d99a2-83db-446f-8f2b-e2938f3cb74a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-vz9hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali787494e4002", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.569 [INFO][3922] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.193/32] ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.569 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali787494e4002 ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.577 [INFO][3922] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.579 [INFO][3922] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"169d99a2-83db-446f-8f2b-e2938f3cb74a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7", Pod:"coredns-6f6b679f8f-vz9hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali787494e4002", MAC:"7e:b1:bd:e9:eb:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:19.604610 containerd[1471]: 2025-01-13 21:28:19.595 [INFO][3922] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7" Namespace="kube-system" Pod="coredns-6f6b679f8f-vz9hf" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0" Jan 13 21:28:19.636883 containerd[1471]: time="2025-01-13T21:28:19.636677198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:19.636883 containerd[1471]: time="2025-01-13T21:28:19.636801482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:19.636883 containerd[1471]: time="2025-01-13T21:28:19.636828444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:19.637901 containerd[1471]: time="2025-01-13T21:28:19.637811279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:19.671013 systemd[1]: run-containerd-runc-k8s.io-82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7-runc.cC2GG5.mount: Deactivated successfully. Jan 13 21:28:19.680249 systemd[1]: Started cri-containerd-82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7.scope - libcontainer container 82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7. Jan 13 21:28:19.733767 containerd[1471]: time="2025-01-13T21:28:19.733629525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vz9hf,Uid:169d99a2-83db-446f-8f2b-e2938f3cb74a,Namespace:kube-system,Attempt:1,} returns sandbox id \"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7\"" Jan 13 21:28:19.737156 containerd[1471]: time="2025-01-13T21:28:19.736967248Z" level=info msg="CreateContainer within sandbox \"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:28:19.754836 containerd[1471]: time="2025-01-13T21:28:19.754785512Z" level=info msg="CreateContainer within sandbox \"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"863da581661df37bb7c583d85142ac28fdf9eb10767cd0b2ac56914125a75de1\"" Jan 13 21:28:19.755869 containerd[1471]: time="2025-01-13T21:28:19.755827176Z" level=info msg="StartContainer for \"863da581661df37bb7c583d85142ac28fdf9eb10767cd0b2ac56914125a75de1\"" Jan 13 21:28:19.797287 systemd[1]: Started cri-containerd-863da581661df37bb7c583d85142ac28fdf9eb10767cd0b2ac56914125a75de1.scope - libcontainer container 863da581661df37bb7c583d85142ac28fdf9eb10767cd0b2ac56914125a75de1. 
Jan 13 21:28:19.831982 containerd[1471]: time="2025-01-13T21:28:19.831731469Z" level=info msg="StartContainer for \"863da581661df37bb7c583d85142ac28fdf9eb10767cd0b2ac56914125a75de1\" returns successfully" Jan 13 21:28:20.534943 kubelet[2579]: I0113 21:28:20.534851 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vz9hf" podStartSLOduration=29.534826711 podStartE2EDuration="29.534826711s" podCreationTimestamp="2025-01-13 21:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:20.533995529 +0000 UTC m=+34.390298589" watchObservedRunningTime="2025-01-13 21:28:20.534826711 +0000 UTC m=+34.391129759" Jan 13 21:28:21.180246 systemd-networkd[1379]: cali787494e4002: Gained IPv6LL Jan 13 21:28:21.307081 containerd[1471]: time="2025-01-13T21:28:21.306086651Z" level=info msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\"" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.369 [INFO][4048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.369 [INFO][4048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" iface="eth0" netns="/var/run/netns/cni-2287dde6-da0f-8f2a-9960-03777f5fc4fc" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.370 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" iface="eth0" netns="/var/run/netns/cni-2287dde6-da0f-8f2a-9960-03777f5fc4fc" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.371 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" iface="eth0" netns="/var/run/netns/cni-2287dde6-da0f-8f2a-9960-03777f5fc4fc" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.371 [INFO][4048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.371 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.398 [INFO][4055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.399 [INFO][4055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.399 [INFO][4055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.410 [WARNING][4055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.410 [INFO][4055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.412 [INFO][4055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:21.414945 containerd[1471]: 2025-01-13 21:28:21.413 [INFO][4048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Jan 13 21:28:21.418098 containerd[1471]: time="2025-01-13T21:28:21.416167108Z" level=info msg="TearDown network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" successfully" Jan 13 21:28:21.418098 containerd[1471]: time="2025-01-13T21:28:21.416216531Z" level=info msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" returns successfully" Jan 13 21:28:21.418098 containerd[1471]: time="2025-01-13T21:28:21.416933514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-84gdc,Uid:4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:28:21.422325 systemd[1]: run-netns-cni\x2d2287dde6\x2dda0f\x2d8f2a\x2d9960\x2d03777f5fc4fc.mount: Deactivated successfully. 
Jan 13 21:28:21.572125 systemd-networkd[1379]: calie3261eac2f7: Link UP Jan 13 21:28:21.573463 systemd-networkd[1379]: calie3261eac2f7: Gained carrier Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.484 [INFO][4061] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0 calico-apiserver-6d4c964784- calico-apiserver 4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4 745 0 2025-01-13 21:27:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4c964784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal calico-apiserver-6d4c964784-84gdc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3261eac2f7 [] []}} ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.484 [INFO][4061] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.522 [INFO][4072] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" HandleID="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.533 [INFO][4072] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" HandleID="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000507b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"calico-apiserver-6d4c964784-84gdc", "timestamp":"2025-01-13 21:28:21.522130019 +0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.533 [INFO][4072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.533 [INFO][4072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.533 [INFO][4072] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.536 [INFO][4072] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.540 [INFO][4072] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.545 [INFO][4072] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.547 [INFO][4072] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.550 [INFO][4072] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.550 [INFO][4072] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.551 [INFO][4072] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.556 [INFO][4072] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.564 [INFO][4072] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.194/26] block=192.168.32.192/26 handle="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.564 [INFO][4072] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.194/26] handle="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.564 [INFO][4072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:28:21.595108 containerd[1471]: 2025-01-13 21:28:21.564 [INFO][4072] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.194/26] IPv6=[] ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" HandleID="k8s-pod-network.ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.567 [INFO][4061] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6d4c964784-84gdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3261eac2f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.567 [INFO][4061] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.194/32] ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.567 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3261eac2f7 ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.571 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.572 [INFO][4061] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf", Pod:"calico-apiserver-6d4c964784-84gdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3261eac2f7", MAC:"26:05:f1:40:ed:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:21.597022 containerd[1471]: 2025-01-13 21:28:21.590 [INFO][4061] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-84gdc" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0" Jan 13 21:28:21.631473 containerd[1471]: time="2025-01-13T21:28:21.631366188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:21.631724 containerd[1471]: time="2025-01-13T21:28:21.631439973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:21.631724 containerd[1471]: time="2025-01-13T21:28:21.631465547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:21.631724 containerd[1471]: time="2025-01-13T21:28:21.631619760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:21.670373 systemd[1]: Started cri-containerd-ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf.scope - libcontainer container ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf. Jan 13 21:28:21.725763 containerd[1471]: time="2025-01-13T21:28:21.725656625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-84gdc,Uid:4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf\"" Jan 13 21:28:21.728234 containerd[1471]: time="2025-01-13T21:28:21.727897123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:28:22.308303 containerd[1471]: time="2025-01-13T21:28:22.307973762Z" level=info msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\"" Jan 13 21:28:22.312416 containerd[1471]: time="2025-01-13T21:28:22.311552403Z" level=info msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\"" Jan 13 21:28:22.317416 containerd[1471]: time="2025-01-13T21:28:22.316844060Z" level=info msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\"" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.443 [INFO][4171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.443 [INFO][4171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" iface="eth0" netns="/var/run/netns/cni-05fc7a09-57fe-7d65-a2e9-d5327d934558" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.445 [INFO][4171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" iface="eth0" netns="/var/run/netns/cni-05fc7a09-57fe-7d65-a2e9-d5327d934558" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.445 [INFO][4171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" iface="eth0" netns="/var/run/netns/cni-05fc7a09-57fe-7d65-a2e9-d5327d934558" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.446 [INFO][4171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.446 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.519 [INFO][4190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.520 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.520 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.543 [WARNING][4190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.543 [INFO][4190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.548 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:22.561159 containerd[1471]: 2025-01-13 21:28:22.557 [INFO][4171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Jan 13 21:28:22.565391 containerd[1471]: time="2025-01-13T21:28:22.561924694Z" level=info msg="TearDown network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" successfully" Jan 13 21:28:22.565391 containerd[1471]: time="2025-01-13T21:28:22.561972870Z" level=info msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" returns successfully" Jan 13 21:28:22.567289 containerd[1471]: time="2025-01-13T21:28:22.566040014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d4lrn,Uid:b1bd5e1f-f801-477d-8fd5-44146cbed3de,Namespace:kube-system,Attempt:1,}" Jan 13 21:28:22.570482 systemd[1]: run-netns-cni\x2d05fc7a09\x2d57fe\x2d7d65\x2da2e9\x2dd5327d934558.mount: Deactivated successfully. Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.453 [INFO][4167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.453 [INFO][4167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" iface="eth0" netns="/var/run/netns/cni-375aceed-f966-d752-7a81-22408f8aeb0b" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.454 [INFO][4167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" iface="eth0" netns="/var/run/netns/cni-375aceed-f966-d752-7a81-22408f8aeb0b" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.457 [INFO][4167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" iface="eth0" netns="/var/run/netns/cni-375aceed-f966-d752-7a81-22408f8aeb0b" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.457 [INFO][4167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.458 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.546 [INFO][4192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.548 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.548 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.564 [WARNING][4192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.565 [INFO][4192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.574 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:22.586060 containerd[1471]: 2025-01-13 21:28:22.581 [INFO][4167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Jan 13 21:28:22.589090 containerd[1471]: time="2025-01-13T21:28:22.587298527Z" level=info msg="TearDown network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" successfully" Jan 13 21:28:22.589090 containerd[1471]: time="2025-01-13T21:28:22.587336911Z" level=info msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" returns successfully" Jan 13 21:28:22.600607 containerd[1471]: time="2025-01-13T21:28:22.600430295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-mgmhs,Uid:bd143408-05e7-4dc4-9e36-d11bd741a281,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:28:22.600825 systemd[1]: run-netns-cni\x2d375aceed\x2df966\x2dd752\x2d7a81\x2d22408f8aeb0b.mount: Deactivated successfully. 
Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.459 [INFO][4175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.460 [INFO][4175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" iface="eth0" netns="/var/run/netns/cni-f161d745-3c14-25bf-c19d-12a3abccfc49" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.462 [INFO][4175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" iface="eth0" netns="/var/run/netns/cni-f161d745-3c14-25bf-c19d-12a3abccfc49" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.465 [INFO][4175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" iface="eth0" netns="/var/run/netns/cni-f161d745-3c14-25bf-c19d-12a3abccfc49" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.465 [INFO][4175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.465 [INFO][4175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.571 [INFO][4200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.571 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.575 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.598 [WARNING][4200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.600 [INFO][4200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.607 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:22.636869 containerd[1471]: 2025-01-13 21:28:22.615 [INFO][4175] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Jan 13 21:28:22.642077 containerd[1471]: time="2025-01-13T21:28:22.641250548Z" level=info msg="TearDown network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" successfully" Jan 13 21:28:22.642077 containerd[1471]: time="2025-01-13T21:28:22.641345190Z" level=info msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" returns successfully" Jan 13 21:28:22.647430 systemd[1]: run-netns-cni\x2df161d745\x2d3c14\x2d25bf\x2dc19d\x2d12a3abccfc49.mount: Deactivated successfully. Jan 13 21:28:22.650520 containerd[1471]: time="2025-01-13T21:28:22.649194124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76vfx,Uid:1eede6f8-94e3-4a63-bb4e-723906a70abc,Namespace:calico-system,Attempt:1,}" Jan 13 21:28:22.717103 systemd-networkd[1379]: calie3261eac2f7: Gained IPv6LL Jan 13 21:28:22.953728 systemd-networkd[1379]: calie75c644dc4c: Link UP Jan 13 21:28:22.957408 systemd-networkd[1379]: calie75c644dc4c: Gained carrier Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.762 [INFO][4211] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0 coredns-6f6b679f8f- kube-system b1bd5e1f-f801-477d-8fd5-44146cbed3de 755 0 2025-01-13 21:27:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal coredns-6f6b679f8f-d4lrn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie75c644dc4c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.763 [INFO][4211] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.846 [INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" HandleID="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.870 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" HandleID="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd270), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-d4lrn", "timestamp":"2025-01-13 21:28:22.846938206 +0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.870 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.870 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.870 [INFO][4251] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.875 [INFO][4251] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.881 [INFO][4251] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.889 [INFO][4251] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.891 [INFO][4251] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.898 [INFO][4251] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.898 [INFO][4251] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.901 [INFO][4251] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.912 [INFO][4251] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.926 [INFO][4251] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.195/26] block=192.168.32.192/26 handle="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.927 [INFO][4251] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.195/26] handle="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 
21:28:22.928 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:22.996207 containerd[1471]: 2025-01-13 21:28:22.928 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.195/26] IPv6=[] ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" HandleID="k8s-pod-network.4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.940 [INFO][4211] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b1bd5e1f-f801-477d-8fd5-44146cbed3de", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-d4lrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75c644dc4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.943 [INFO][4211] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.195/32] ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.943 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie75c644dc4c ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 
13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.954 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.956 [INFO][4211] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b1bd5e1f-f801-477d-8fd5-44146cbed3de", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c", Pod:"coredns-6f6b679f8f-d4lrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75c644dc4c", MAC:"ce:cc:37:b7:25:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:22.998492 containerd[1471]: 2025-01-13 21:28:22.985 [INFO][4211] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c" Namespace="kube-system" Pod="coredns-6f6b679f8f-d4lrn" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0" Jan 13 21:28:23.062392 containerd[1471]: time="2025-01-13T21:28:23.062285714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:23.062593 containerd[1471]: time="2025-01-13T21:28:23.062366771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:23.062593 containerd[1471]: time="2025-01-13T21:28:23.062390225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.062593 containerd[1471]: time="2025-01-13T21:28:23.062498898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.108657 systemd[1]: Started cri-containerd-4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c.scope - libcontainer container 4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c. Jan 13 21:28:23.160397 systemd-networkd[1379]: calia3c3e9edf71: Link UP Jan 13 21:28:23.163028 systemd-networkd[1379]: calia3c3e9edf71: Gained carrier Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:22.773 [INFO][4220] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0 calico-apiserver-6d4c964784- calico-apiserver bd143408-05e7-4dc4-9e36-d11bd741a281 756 0 2025-01-13 21:27:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4c964784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal calico-apiserver-6d4c964784-mgmhs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3c3e9edf71 [] []}} ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:22.773 [INFO][4220] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:22.931 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" HandleID="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.070 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" HandleID="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038f930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"calico-apiserver-6d4c964784-mgmhs", "timestamp":"2025-01-13 21:28:22.931868634 
+0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.070 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.071 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.071 [INFO][4257] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.075 [INFO][4257] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.088 [INFO][4257] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.098 [INFO][4257] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.106 [INFO][4257] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.112 [INFO][4257] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.113 [INFO][4257] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.116 [INFO][4257] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8 Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.124 [INFO][4257] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.140 [INFO][4257] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.196/26] block=192.168.32.192/26 handle="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.140 [INFO][4257] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.196/26] handle="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.140 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:28:23.226662 containerd[1471]: 2025-01-13 21:28:23.140 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.196/26] IPv6=[] ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" HandleID="k8s-pod-network.eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.146 [INFO][4220] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd143408-05e7-4dc4-9e36-d11bd741a281", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6d4c964784-mgmhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c3e9edf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.146 [INFO][4220] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.196/32] ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.146 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3c3e9edf71 ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.165 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.170 [INFO][4220] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd143408-05e7-4dc4-9e36-d11bd741a281", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8", Pod:"calico-apiserver-6d4c964784-mgmhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c3e9edf71", MAC:"de:92:a3:68:e7:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:23.227840 containerd[1471]: 2025-01-13 21:28:23.218 [INFO][4220] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c964784-mgmhs" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0" Jan 13 21:28:23.295618 systemd-networkd[1379]: cali7fd2cd81427: Link UP Jan 13 21:28:23.297472 systemd-networkd[1379]: cali7fd2cd81427: Gained carrier Jan 13 21:28:23.308286 containerd[1471]: time="2025-01-13T21:28:23.308197862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d4lrn,Uid:b1bd5e1f-f801-477d-8fd5-44146cbed3de,Namespace:kube-system,Attempt:1,} returns sandbox id \"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c\"" Jan 13 21:28:23.309922 containerd[1471]: time="2025-01-13T21:28:23.309098914Z" level=info msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\"" Jan 13 21:28:23.331287 containerd[1471]: time="2025-01-13T21:28:23.330829485Z" level=info msg="CreateContainer within sandbox \"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:22.846 [INFO][4231] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0 csi-node-driver- calico-system 1eede6f8-94e3-4a63-bb4e-723906a70abc 757 0 2025-01-13 21:27:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal csi-node-driver-76vfx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7fd2cd81427 [] []}} ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:22.846 [INFO][4231] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:22.978 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" HandleID="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.080 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" HandleID="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"csi-node-driver-76vfx", "timestamp":"2025-01-13 21:28:22.978339917 +0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.080 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.142 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
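
[Note] Request [4263] logged "About to acquire host-wide IPAM lock." at 21:28:23.080 but "Acquired" only at 21:28:23.142, right after request [4257] released the lock at 21:28:23.140: concurrent CNI ADDs on a node are serialized by this lock. A quick sketch recomputing the wait from the logged timestamps (plain Go; the date is taken from the journal):

    // Recomputes how long the [4263] IPAM request waited on the host-wide lock.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.000"
        asked, _ := time.Parse(layout, "2025-01-13 21:28:23.080") // "About to acquire"
        got, _ := time.Parse(layout, "2025-01-13 21:28:23.142")   // "Acquired"
        fmt.Println("waited on host-wide IPAM lock:", got.Sub(asked)) // 62ms
    }
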
Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.142 [INFO][4263] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.181 [INFO][4263] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.201 [INFO][4263] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.215 [INFO][4263] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.222 [INFO][4263] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.238 [INFO][4263] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.238 [INFO][4263] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.241 [INFO][4263] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321 Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.262 [INFO][4263] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.277 [INFO][4263] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.197/26] block=192.168.32.192/26 handle="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.278 [INFO][4263] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.197/26] handle="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.278 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
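
[Note] Each claim above is indexed by a handle of the form "k8s-pod-network." plus the CNI container ID; the teardown later in this section releases addresses by that same handle ("Releasing address using handleID"). A trivial sketch of the construction, using the csi pod's container ID from this log:

    // Builds the IPAM handle ID exactly as it appears in the log lines above.
    package main

    import "fmt"

    func main() {
        containerID := "9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321"
        fmt.Println("k8s-pod-network." + containerID)
    }
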
Jan 13 21:28:23.351886 containerd[1471]: 2025-01-13 21:28:23.278 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.197/26] IPv6=[] ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" HandleID="k8s-pod-network.9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.287 [INFO][4231] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1eede6f8-94e3-4a63-bb4e-723906a70abc", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-76vfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fd2cd81427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.287 [INFO][4231] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.197/32] ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.289 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fd2cd81427 ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.296 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" 
WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.302 [INFO][4231] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1eede6f8-94e3-4a63-bb4e-723906a70abc", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321", Pod:"csi-node-driver-76vfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fd2cd81427", MAC:"52:f5:1b:6f:67:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:23.354037 containerd[1471]: 2025-01-13 21:28:23.346 [INFO][4231] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321" Namespace="calico-system" Pod="csi-node-driver-76vfx" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0" Jan 13 21:28:23.372104 containerd[1471]: time="2025-01-13T21:28:23.370539208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:23.372104 containerd[1471]: time="2025-01-13T21:28:23.370858863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:23.372104 containerd[1471]: time="2025-01-13T21:28:23.370994876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.373512 containerd[1471]: time="2025-01-13T21:28:23.373347323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.421425 containerd[1471]: time="2025-01-13T21:28:23.421362518Z" level=info msg="CreateContainer within sandbox \"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bc2990b05522836199f7a692861d41c133b241215c30b3087eb000ad09da9a9\"" Jan 13 21:28:23.425625 containerd[1471]: time="2025-01-13T21:28:23.424892095Z" level=info msg="StartContainer for \"7bc2990b05522836199f7a692861d41c133b241215c30b3087eb000ad09da9a9\"" Jan 13 21:28:23.462041 systemd[1]: Started cri-containerd-eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8.scope - libcontainer container eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8. Jan 13 21:28:23.497871 containerd[1471]: time="2025-01-13T21:28:23.496410242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:23.497871 containerd[1471]: time="2025-01-13T21:28:23.496504157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:23.497871 containerd[1471]: time="2025-01-13T21:28:23.496527735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.497871 containerd[1471]: time="2025-01-13T21:28:23.496651456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:23.632824 systemd[1]: Started cri-containerd-7bc2990b05522836199f7a692861d41c133b241215c30b3087eb000ad09da9a9.scope - libcontainer container 7bc2990b05522836199f7a692861d41c133b241215c30b3087eb000ad09da9a9. Jan 13 21:28:23.640270 systemd[1]: Started cri-containerd-9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321.scope - libcontainer container 9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321. Jan 13 21:28:23.715612 containerd[1471]: time="2025-01-13T21:28:23.713734390Z" level=info msg="StartContainer for \"7bc2990b05522836199f7a692861d41c133b241215c30b3087eb000ad09da9a9\" returns successfully" Jan 13 21:28:23.728675 containerd[1471]: time="2025-01-13T21:28:23.728562864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c964784-mgmhs,Uid:bd143408-05e7-4dc4-9e36-d11bd741a281,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8\"" Jan 13 21:28:23.780985 containerd[1471]: time="2025-01-13T21:28:23.780408122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76vfx,Uid:1eede6f8-94e3-4a63-bb4e-723906a70abc,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321\"" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.574 [INFO][4376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.576 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" iface="eth0" netns="/var/run/netns/cni-8947c51c-ba1c-8c6d-5716-99ebd49414ae" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.577 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" iface="eth0" netns="/var/run/netns/cni-8947c51c-ba1c-8c6d-5716-99ebd49414ae" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.594 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" iface="eth0" netns="/var/run/netns/cni-8947c51c-ba1c-8c6d-5716-99ebd49414ae" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.594 [INFO][4376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.594 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.771 [INFO][4462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.772 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.772 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.800 [WARNING][4462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.800 [INFO][4462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.805 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:23.813759 containerd[1471]: 2025-01-13 21:28:23.808 [INFO][4376] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Jan 13 21:28:23.817883 containerd[1471]: time="2025-01-13T21:28:23.817160931Z" level=info msg="TearDown network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" successfully" Jan 13 21:28:23.820186 containerd[1471]: time="2025-01-13T21:28:23.820110581Z" level=info msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" returns successfully" Jan 13 21:28:23.822431 systemd[1]: run-netns-cni\x2d8947c51c\x2dba1c\x2d8c6d\x2d5716\x2d99ebd49414ae.mount: Deactivated successfully. Jan 13 21:28:23.824562 containerd[1471]: time="2025-01-13T21:28:23.823333834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b795dcbb4-fq58p,Uid:b0798d6b-8ec9-490c-a862-c6be718179f8,Namespace:calico-system,Attempt:1,}" Jan 13 21:28:24.063092 systemd-networkd[1379]: cali46bfd9e7ca7: Link UP Jan 13 21:28:24.069163 systemd-networkd[1379]: cali46bfd9e7ca7: Gained carrier Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:23.929 [INFO][4510] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0 calico-kube-controllers-7b795dcbb4- calico-system b0798d6b-8ec9-490c-a862-c6be718179f8 771 0 2025-01-13 21:27:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b795dcbb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal calico-kube-controllers-7b795dcbb4-fq58p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali46bfd9e7ca7 [] []}} ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:23.930 [INFO][4510] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:23.994 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" HandleID="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.010 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" HandleID="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", "pod":"calico-kube-controllers-7b795dcbb4-fq58p", "timestamp":"2025-01-13 21:28:23.994819766 +0000 UTC"}, Hostname:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.010 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.011 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.011 [INFO][4520] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal' Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.014 [INFO][4520] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.020 [INFO][4520] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.027 [INFO][4520] ipam/ipam.go 489: Trying affinity for 192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.031 [INFO][4520] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.034 [INFO][4520] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.034 [INFO][4520] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.037 [INFO][4520] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.044 [INFO][4520] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.054 [INFO][4520] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.198/26] block=192.168.32.192/26 handle="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.054 [INFO][4520] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.198/26] 
handle="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" host="ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal" Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.054 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:24.098648 containerd[1471]: 2025-01-13 21:28:24.054 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.198/26] IPv6=[] ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" HandleID="k8s-pod-network.c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.058 [INFO][4510] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0", GenerateName:"calico-kube-controllers-7b795dcbb4-", Namespace:"calico-system", SelfLink:"", UID:"b0798d6b-8ec9-490c-a862-c6be718179f8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b795dcbb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-7b795dcbb4-fq58p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bfd9e7ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.059 [INFO][4510] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.198/32] ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.059 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46bfd9e7ca7 ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" 
WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.062 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.066 [INFO][4510] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0", GenerateName:"calico-kube-controllers-7b795dcbb4-", Namespace:"calico-system", SelfLink:"", UID:"b0798d6b-8ec9-490c-a862-c6be718179f8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b795dcbb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c", Pod:"calico-kube-controllers-7b795dcbb4-fq58p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bfd9e7ca7", MAC:"de:ee:1c:2e:6f:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:24.103236 containerd[1471]: 2025-01-13 21:28:24.089 [INFO][4510] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c" Namespace="calico-system" Pod="calico-kube-controllers-7b795dcbb4-fq58p" WorkloadEndpoint="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0" Jan 13 21:28:24.164556 containerd[1471]: time="2025-01-13T21:28:24.164380207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:24.166589 containerd[1471]: time="2025-01-13T21:28:24.165935114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:24.166589 containerd[1471]: time="2025-01-13T21:28:24.165973223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:24.167999 containerd[1471]: time="2025-01-13T21:28:24.167612243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:24.201376 systemd[1]: Started cri-containerd-c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c.scope - libcontainer container c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c. Jan 13 21:28:24.289317 containerd[1471]: time="2025-01-13T21:28:24.289215504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b795dcbb4-fq58p,Uid:b0798d6b-8ec9-490c-a862-c6be718179f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c\"" Jan 13 21:28:24.619174 kubelet[2579]: I0113 21:28:24.619077 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d4lrn" podStartSLOduration=33.619024158 podStartE2EDuration="33.619024158s" podCreationTimestamp="2025-01-13 21:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:24.592116041 +0000 UTC m=+38.448419089" watchObservedRunningTime="2025-01-13 21:28:24.619024158 +0000 UTC m=+38.475327208" Jan 13 21:28:24.700415 systemd-networkd[1379]: cali7fd2cd81427: Gained IPv6LL Jan 13 21:28:24.828496 systemd-networkd[1379]: calie75c644dc4c: Gained IPv6LL Jan 13 21:28:25.058997 containerd[1471]: time="2025-01-13T21:28:25.058937405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:25.060324 containerd[1471]: time="2025-01-13T21:28:25.060208270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:28:25.061180 containerd[1471]: time="2025-01-13T21:28:25.061133917Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:25.064483 containerd[1471]: time="2025-01-13T21:28:25.064439913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:25.066108 containerd[1471]: time="2025-01-13T21:28:25.065805778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.33786137s" Jan 13 21:28:25.066108 containerd[1471]: time="2025-01-13T21:28:25.065853727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:28:25.068108 containerd[1471]: time="2025-01-13T21:28:25.067769595Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:28:25.070568 containerd[1471]: time="2025-01-13T21:28:25.069978999Z" level=info msg="CreateContainer within sandbox \"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:28:25.084363 systemd-networkd[1379]: calia3c3e9edf71: Gained IPv6LL Jan 13 21:28:25.090634 containerd[1471]: time="2025-01-13T21:28:25.090459002Z" level=info msg="CreateContainer within sandbox \"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"715e74b6935ca073cb64123d36857d9675026c1ef2a5a04ae389d43daa24abc2\"" Jan 13 21:28:25.093920 containerd[1471]: time="2025-01-13T21:28:25.093449901Z" level=info msg="StartContainer for \"715e74b6935ca073cb64123d36857d9675026c1ef2a5a04ae389d43daa24abc2\"" Jan 13 21:28:25.150260 systemd[1]: Started cri-containerd-715e74b6935ca073cb64123d36857d9675026c1ef2a5a04ae389d43daa24abc2.scope - libcontainer container 715e74b6935ca073cb64123d36857d9675026c1ef2a5a04ae389d43daa24abc2. Jan 13 21:28:25.211454 containerd[1471]: time="2025-01-13T21:28:25.211398172Z" level=info msg="StartContainer for \"715e74b6935ca073cb64123d36857d9675026c1ef2a5a04ae389d43daa24abc2\" returns successfully" Jan 13 21:28:25.213301 systemd-networkd[1379]: cali46bfd9e7ca7: Gained IPv6LL Jan 13 21:28:25.273093 containerd[1471]: time="2025-01-13T21:28:25.272425626Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:25.275553 containerd[1471]: time="2025-01-13T21:28:25.275487123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:28:25.278788 containerd[1471]: time="2025-01-13T21:28:25.278723178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 210.911519ms" Jan 13 21:28:25.278938 containerd[1471]: time="2025-01-13T21:28:25.278794424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:28:25.281494 containerd[1471]: time="2025-01-13T21:28:25.281261001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:28:25.284013 containerd[1471]: time="2025-01-13T21:28:25.283528453Z" level=info msg="CreateContainer within sandbox \"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:28:25.307856 containerd[1471]: time="2025-01-13T21:28:25.307597253Z" level=info msg="CreateContainer within sandbox \"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"67ad7f5624e93ebab3ea4da6a106f6066f680a76ee2168f17b83b40e10864519\"" Jan 13 21:28:25.311335 containerd[1471]: time="2025-01-13T21:28:25.311198439Z" level=info msg="StartContainer for \"67ad7f5624e93ebab3ea4da6a106f6066f680a76ee2168f17b83b40e10864519\"" Jan 13 21:28:25.387307 systemd[1]: Started 
cri-containerd-67ad7f5624e93ebab3ea4da6a106f6066f680a76ee2168f17b83b40e10864519.scope - libcontainer container 67ad7f5624e93ebab3ea4da6a106f6066f680a76ee2168f17b83b40e10864519. Jan 13 21:28:25.458589 containerd[1471]: time="2025-01-13T21:28:25.458540483Z" level=info msg="StartContainer for \"67ad7f5624e93ebab3ea4da6a106f6066f680a76ee2168f17b83b40e10864519\" returns successfully" Jan 13 21:28:25.572545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910691603.mount: Deactivated successfully. Jan 13 21:28:25.647361 kubelet[2579]: I0113 21:28:25.647245 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4c964784-mgmhs" podStartSLOduration=27.101635883 podStartE2EDuration="28.647195936s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:28:23.734386152 +0000 UTC m=+37.590689181" lastFinishedPulling="2025-01-13 21:28:25.279946183 +0000 UTC m=+39.136249234" observedRunningTime="2025-01-13 21:28:25.622026334 +0000 UTC m=+39.478329383" watchObservedRunningTime="2025-01-13 21:28:25.647195936 +0000 UTC m=+39.503498985" Jan 13 21:28:25.648763 kubelet[2579]: I0113 21:28:25.648176 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4c964784-84gdc" podStartSLOduration=25.307989913 podStartE2EDuration="28.64815943s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:28:21.727409957 +0000 UTC m=+35.583712981" lastFinishedPulling="2025-01-13 21:28:25.067579465 +0000 UTC m=+38.923882498" observedRunningTime="2025-01-13 21:28:25.646745516 +0000 UTC m=+39.503048565" watchObservedRunningTime="2025-01-13 21:28:25.64815943 +0000 UTC m=+39.504462484" Jan 13 21:28:26.575864 containerd[1471]: time="2025-01-13T21:28:26.575804447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:26.578000 containerd[1471]: time="2025-01-13T21:28:26.577717405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:28:26.582324 containerd[1471]: time="2025-01-13T21:28:26.580810218Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:26.588076 containerd[1471]: time="2025-01-13T21:28:26.586283597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:26.591148 containerd[1471]: time="2025-01-13T21:28:26.591099881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.309795831s" Jan 13 21:28:26.591279 containerd[1471]: time="2025-01-13T21:28:26.591152252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:28:26.597320 containerd[1471]: time="2025-01-13T21:28:26.597279474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 
21:28:26.602533 containerd[1471]: time="2025-01-13T21:28:26.602489018Z" level=info msg="CreateContainer within sandbox \"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:28:26.606954 kubelet[2579]: I0113 21:28:26.606909 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:28:26.608236 kubelet[2579]: I0113 21:28:26.608192 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:28:26.698300 containerd[1471]: time="2025-01-13T21:28:26.697766205Z" level=info msg="CreateContainer within sandbox \"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"57488857e66fba74bff356499371dc4bff05ad7ab036fad2f0bf1ebc876c0d98\"" Jan 13 21:28:26.703866 containerd[1471]: time="2025-01-13T21:28:26.703809812Z" level=info msg="StartContainer for \"57488857e66fba74bff356499371dc4bff05ad7ab036fad2f0bf1ebc876c0d98\"" Jan 13 21:28:26.799175 systemd[1]: Started cri-containerd-57488857e66fba74bff356499371dc4bff05ad7ab036fad2f0bf1ebc876c0d98.scope - libcontainer container 57488857e66fba74bff356499371dc4bff05ad7ab036fad2f0bf1ebc876c0d98. Jan 13 21:28:26.905690 containerd[1471]: time="2025-01-13T21:28:26.905193494Z" level=info msg="StartContainer for \"57488857e66fba74bff356499371dc4bff05ad7ab036fad2f0bf1ebc876c0d98\" returns successfully" Jan 13 21:28:27.898808 kubelet[2579]: I0113 21:28:27.898700 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:28:27.956330 systemd[1]: run-containerd-runc-k8s.io-8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e-runc.XWXdNx.mount: Deactivated successfully. 
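
[Note] A rough pull-rate check from the csi image pull logged above: 7,902,632 bytes read in 1.309795831s ("bytes read" is what containerd actually fetched; the logged size "9395716" is the image's registered size, so the two differ). Plain Go sketch:

    // Back-of-the-envelope transfer rate for the calico/csi:v3.29.1 pull.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 7902632.0             // from "stop pulling image ... bytes read=7902632"
        elapsed := 1309795831 * time.Nanosecond // from "... in 1.309795831s"
        fmt.Printf("~%.1f MiB/s\n", bytesRead/elapsed.Seconds()/(1<<20)) // ~5.8 MiB/s
    }
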
Jan 13 21:28:28.007686 ntpd[1436]: Listen normally on 8 vxlan.calico 192.168.32.192:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 8 vxlan.calico 192.168.32.192:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 9 vxlan.calico [fe80::6450:83ff:fef3:f172%4]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 10 cali787494e4002 [fe80::ecee:eeff:feee:eeee%7]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 11 calie3261eac2f7 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 12 calie75c644dc4c [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 13 calia3c3e9edf71 [fe80::ecee:eeff:feee:eeee%10]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 14 cali7fd2cd81427 [fe80::ecee:eeff:feee:eeee%11]:123
Jan 13 21:28:28.011419 ntpd[1436]: 13 Jan 21:28:28 ntpd[1436]: Listen normally on 15 cali46bfd9e7ca7 [fe80::ecee:eeff:feee:eeee%12]:123
Jan 13 21:28:28.009924 ntpd[1436]: Listen normally on 9 vxlan.calico [fe80::6450:83ff:fef3:f172%4]:123
Jan 13 21:28:28.010012 ntpd[1436]: Listen normally on 10 cali787494e4002 [fe80::ecee:eeff:feee:eeee%7]:123
Jan 13 21:28:28.010109 ntpd[1436]: Listen normally on 11 calie3261eac2f7 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 13 21:28:28.010168 ntpd[1436]: Listen normally on 12 calie75c644dc4c [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 21:28:28.010222 ntpd[1436]: Listen normally on 13 calia3c3e9edf71 [fe80::ecee:eeff:feee:eeee%10]:123
Jan 13 21:28:28.010272 ntpd[1436]: Listen normally on 14 cali7fd2cd81427 [fe80::ecee:eeff:feee:eeee%11]:123
Jan 13 21:28:28.010323 ntpd[1436]: Listen normally on 15 cali46bfd9e7ca7 [fe80::ecee:eeff:feee:eeee%12]:123
Jan 13 21:28:28.301288 systemd[1]: run-containerd-runc-k8s.io-8566683c18d614d6007843e45f762b555482cb027a22db4b60fa82d978d94b1e-runc.uY9P92.mount: Deactivated successfully.
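
[Note] ntpd opens a listen socket on every new interface, so each Calico veth brought up in this section appears above. All of the cali* interfaces share the link-local fe80::ecee:eeff:feee:eeee because Calico conventionally assigns the fixed MAC ee:ee:ee:ee:ee:ee to the host side of each workload veth (EUI-64 derivation flips the locally-administered bit, ee -> ec); only the zone (the %7..%12 interface index) differs. A sketch showing Go's netip carrying that zone:

    // Parses one of the scoped link-local addresses ntpd is listening on.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        a := netip.MustParseAddr("fe80::ecee:eeff:feee:eeee%11") // zone 11 = cali7fd2cd81427 above
        fmt.Println(a.IsLinkLocalUnicast(), a.Zone())            // true 11
    }
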
Jan 13 21:28:29.119207 containerd[1471]: time="2025-01-13T21:28:29.119136648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:29.120683 containerd[1471]: time="2025-01-13T21:28:29.120603667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:28:29.122301 containerd[1471]: time="2025-01-13T21:28:29.122226954Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:29.125491 containerd[1471]: time="2025-01-13T21:28:29.125439847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:29.127315 containerd[1471]: time="2025-01-13T21:28:29.126603853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.529274559s" Jan 13 21:28:29.127315 containerd[1471]: time="2025-01-13T21:28:29.126654024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:28:29.128674 containerd[1471]: time="2025-01-13T21:28:29.128645464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:28:29.154679 containerd[1471]: time="2025-01-13T21:28:29.154628636Z" level=info msg="CreateContainer within sandbox \"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:28:29.174958 containerd[1471]: time="2025-01-13T21:28:29.174918913Z" level=info msg="CreateContainer within sandbox \"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875\"" Jan 13 21:28:29.177321 containerd[1471]: time="2025-01-13T21:28:29.175844331Z" level=info msg="StartContainer for \"8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875\"" Jan 13 21:28:29.231276 systemd[1]: Started cri-containerd-8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875.scope - libcontainer container 8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875. 
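
[Note] The kubelet's "Observed pod startup duration" lines in this section report podStartSLOduration as the time the running state was observed minus podCreationTimestamp, with image-pull time excluded; that is why pods with real pulls (the apiserver pair earlier) show an SLO duration shorter than the E2E duration, while for coredns-6f6b679f8f-d4lrn, which pulled nothing (firstStartedPulling is the zero time), the two match. Recomputing the coredns figure from the logged values:

    // Recomputes podStartSLOduration for coredns-6f6b679f8f-d4lrn from the
    // timestamps the kubelet logged (layout matches Go's time.Time.String()).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-13 21:27:51 +0000 UTC")          // podCreationTimestamp
        running, _ := time.Parse(layout, "2025-01-13 21:28:24.619024158 +0000 UTC") // watchObservedRunningTime
        fmt.Println(running.Sub(created)) // 33.619024158s
    }
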
Jan 13 21:28:29.291713 containerd[1471]: time="2025-01-13T21:28:29.291658320Z" level=info msg="StartContainer for \"8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875\" returns successfully"
Jan 13 21:28:29.651118 kubelet[2579]: I0113 21:28:29.650669 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b795dcbb4-fq58p" podStartSLOduration=27.815813974 podStartE2EDuration="32.650559678s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:28:24.292921422 +0000 UTC m=+38.149224459" lastFinishedPulling="2025-01-13 21:28:29.127667119 +0000 UTC m=+42.983970163" observedRunningTime="2025-01-13 21:28:29.649094165 +0000 UTC m=+43.505397214" watchObservedRunningTime="2025-01-13 21:28:29.650559678 +0000 UTC m=+43.506862727"
Jan 13 21:28:30.475129 containerd[1471]: time="2025-01-13T21:28:30.475035034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:30.476562 containerd[1471]: time="2025-01-13T21:28:30.476480235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 21:28:30.477879 containerd[1471]: time="2025-01-13T21:28:30.477812406Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:30.481144 containerd[1471]: time="2025-01-13T21:28:30.480933648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:30.482462 containerd[1471]: time="2025-01-13T21:28:30.481915595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.353077991s"
Jan 13 21:28:30.482462 containerd[1471]: time="2025-01-13T21:28:30.481964901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 21:28:30.485465 containerd[1471]: time="2025-01-13T21:28:30.485088820Z" level=info msg="CreateContainer within sandbox \"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 21:28:30.504456 containerd[1471]: time="2025-01-13T21:28:30.504409322Z" level=info msg="CreateContainer within sandbox \"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"029d14e0bccbf0815306b5ea8fff26f1bed95ab1f95d66896c857ba3f24730f2\""
Jan 13 21:28:30.506720 containerd[1471]: time="2025-01-13T21:28:30.506329354Z" level=info msg="StartContainer for \"029d14e0bccbf0815306b5ea8fff26f1bed95ab1f95d66896c857ba3f24730f2\""
Jan 13 21:28:30.558291 systemd[1]: Started cri-containerd-029d14e0bccbf0815306b5ea8fff26f1bed95ab1f95d66896c857ba3f24730f2.scope - libcontainer container 029d14e0bccbf0815306b5ea8fff26f1bed95ab1f95d66896c857ba3f24730f2.
Jan 13 21:28:30.606912 containerd[1471]: time="2025-01-13T21:28:30.606529018Z" level=info msg="StartContainer for \"029d14e0bccbf0815306b5ea8fff26f1bed95ab1f95d66896c857ba3f24730f2\" returns successfully"
Jan 13 21:28:30.654204 kubelet[2579]: I0113 21:28:30.651200 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76vfx" podStartSLOduration=26.954220481 podStartE2EDuration="33.651176268s" podCreationTimestamp="2025-01-13 21:27:57 +0000 UTC" firstStartedPulling="2025-01-13 21:28:23.786086648 +0000 UTC m=+37.642389679" lastFinishedPulling="2025-01-13 21:28:30.483042434 +0000 UTC m=+44.339345466" observedRunningTime="2025-01-13 21:28:30.651123732 +0000 UTC m=+44.507426780" watchObservedRunningTime="2025-01-13 21:28:30.651176268 +0000 UTC m=+44.507479294"
Jan 13 21:28:30.735790 kubelet[2579]: I0113 21:28:30.734613 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:28:31.431363 kubelet[2579]: I0113 21:28:31.431291 2579 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 21:28:31.431363 kubelet[2579]: I0113 21:28:31.431345 2579 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 21:28:36.701546 systemd[1]: Started sshd@9-10.128.0.96:22-147.75.109.163:34728.service - OpenSSH per-connection server daemon (147.75.109.163:34728).
Jan 13 21:28:37.010165 sshd[4888]: Accepted publickey for core from 147.75.109.163 port 34728 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:37.012355 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:37.018659 systemd-logind[1454]: New session 10 of user core.
Jan 13 21:28:37.024446 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:28:37.402126 sshd[4888]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:37.408273 systemd[1]: sshd@9-10.128.0.96:22-147.75.109.163:34728.service: Deactivated successfully.
Jan 13 21:28:37.411705 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:28:37.412849 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:28:37.414526 systemd-logind[1454]: Removed session 10.
Jan 13 21:28:42.459502 systemd[1]: Started sshd@10-10.128.0.96:22-147.75.109.163:48168.service - OpenSSH per-connection server daemon (147.75.109.163:48168).
Jan 13 21:28:42.751117 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 48168 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:42.752734 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:42.759328 systemd-logind[1454]: New session 11 of user core.
Jan 13 21:28:42.765304 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:28:43.046964 sshd[4911]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:43.052409 systemd[1]: sshd@10-10.128.0.96:22-147.75.109.163:48168.service: Deactivated successfully.
Jan 13 21:28:43.055632 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:28:43.057636 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:28:43.059361 systemd-logind[1454]: Removed session 11.
Jan 13 21:28:46.309188 containerd[1471]: time="2025-01-13T21:28:46.308678390Z" level=info msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\""
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.357 [WARNING][4939] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"169d99a2-83db-446f-8f2b-e2938f3cb74a", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7", Pod:"coredns-6f6b679f8f-vz9hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali787494e4002", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.357 [INFO][4939] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.357 [INFO][4939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" iface="eth0" netns=""
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.357 [INFO][4939] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.357 [INFO][4939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.387 [INFO][4946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.387 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.388 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.395 [WARNING][4946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.395 [INFO][4946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.397 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.400356 containerd[1471]: 2025-01-13 21:28:46.399 [INFO][4939] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.400356 containerd[1471]: time="2025-01-13T21:28:46.400379934Z" level=info msg="TearDown network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" successfully"
Jan 13 21:28:46.401744 containerd[1471]: time="2025-01-13T21:28:46.400415630Z" level=info msg="StopPodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" returns successfully"
Jan 13 21:28:46.401744 containerd[1471]: time="2025-01-13T21:28:46.401375370Z" level=info msg="RemovePodSandbox for \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\""
Jan 13 21:28:46.401744 containerd[1471]: time="2025-01-13T21:28:46.401488435Z" level=info msg="Forcibly stopping sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\""
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.453 [WARNING][4964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"169d99a2-83db-446f-8f2b-e2938f3cb74a", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"82ffc3b9626939e42b5e810dadef72269b39ec94d91f70dad44890645afc5ee7", Pod:"coredns-6f6b679f8f-vz9hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali787494e4002", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.454 [INFO][4964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.454 [INFO][4964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" iface="eth0" netns=""
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.454 [INFO][4964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.454 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.478 [INFO][4970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.478 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.478 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.486 [WARNING][4970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.486 [INFO][4970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" HandleID="k8s-pod-network.aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vz9hf-eth0"
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.489 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.491988 containerd[1471]: 2025-01-13 21:28:46.490 [INFO][4964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956"
Jan 13 21:28:46.492921 containerd[1471]: time="2025-01-13T21:28:46.492077087Z" level=info msg="TearDown network for sandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" successfully"
Jan 13 21:28:46.497532 containerd[1471]: time="2025-01-13T21:28:46.497462715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:46.497728 containerd[1471]: time="2025-01-13T21:28:46.497563962Z" level=info msg="RemovePodSandbox \"aa358e8a153b1e1dc7c19ca76a2a6a66d96425e7f44bd2feebd536e7dcdd1956\" returns successfully"
Jan 13 21:28:46.498375 containerd[1471]: time="2025-01-13T21:28:46.498323254Z" level=info msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\""
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.542 [WARNING][4988] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf", Pod:"calico-apiserver-6d4c964784-84gdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3261eac2f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.542 [INFO][4988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.542 [INFO][4988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" iface="eth0" netns=""
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.542 [INFO][4988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.542 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.570 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.570 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.570 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.578 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.578 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.580 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.583806 containerd[1471]: 2025-01-13 21:28:46.581 [INFO][4988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.583806 containerd[1471]: time="2025-01-13T21:28:46.583517311Z" level=info msg="TearDown network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" successfully"
Jan 13 21:28:46.583806 containerd[1471]: time="2025-01-13T21:28:46.583560073Z" level=info msg="StopPodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" returns successfully"
Jan 13 21:28:46.586139 containerd[1471]: time="2025-01-13T21:28:46.585990807Z" level=info msg="RemovePodSandbox for \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\""
Jan 13 21:28:46.586139 containerd[1471]: time="2025-01-13T21:28:46.586034179Z" level=info msg="Forcibly stopping sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\""
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.631 [WARNING][5012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e0e803f-5d08-4bec-b2f7-1b57af2ab9b4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"ed31987968071448329e82f20b082d8442494797beeaf63478f8f3703affc5bf", Pod:"calico-apiserver-6d4c964784-84gdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3261eac2f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.631 [INFO][5012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.631 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" iface="eth0" netns=""
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.631 [INFO][5012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.631 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.656 [INFO][5018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.656 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.656 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.666 [WARNING][5018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.666 [INFO][5018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" HandleID="k8s-pod-network.6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--84gdc-eth0"
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.667 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.670603 containerd[1471]: 2025-01-13 21:28:46.669 [INFO][5012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d"
Jan 13 21:28:46.670603 containerd[1471]: time="2025-01-13T21:28:46.670636487Z" level=info msg="TearDown network for sandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" successfully"
Jan 13 21:28:46.675439 containerd[1471]: time="2025-01-13T21:28:46.675351424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:46.675581 containerd[1471]: time="2025-01-13T21:28:46.675442274Z" level=info msg="RemovePodSandbox \"6b311cb75121c78b1e79d8338573a98b1c1c14ce511a4b7f9e106a78aa0f3c6d\" returns successfully"
Jan 13 21:28:46.676274 containerd[1471]: time="2025-01-13T21:28:46.676169056Z" level=info msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\""
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.724 [WARNING][5036] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1eede6f8-94e3-4a63-bb4e-723906a70abc", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321", Pod:"csi-node-driver-76vfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fd2cd81427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.724 [INFO][5036] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.724 [INFO][5036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" iface="eth0" netns=""
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.724 [INFO][5036] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.724 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.751 [INFO][5042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.751 [INFO][5042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.752 [INFO][5042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.760 [WARNING][5042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.760 [INFO][5042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.761 [INFO][5042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.764292 containerd[1471]: 2025-01-13 21:28:46.762 [INFO][5036] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.765141 containerd[1471]: time="2025-01-13T21:28:46.764311154Z" level=info msg="TearDown network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" successfully"
Jan 13 21:28:46.765141 containerd[1471]: time="2025-01-13T21:28:46.764346414Z" level=info msg="StopPodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" returns successfully"
Jan 13 21:28:46.765141 containerd[1471]: time="2025-01-13T21:28:46.764948115Z" level=info msg="RemovePodSandbox for \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\""
Jan 13 21:28:46.765141 containerd[1471]: time="2025-01-13T21:28:46.764992580Z" level=info msg="Forcibly stopping sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\""
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.811 [WARNING][5060] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1eede6f8-94e3-4a63-bb4e-723906a70abc", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"9ea28c78f4e625fd986239fd748e7d97e57b98e46c391a526603280fe5002321", Pod:"csi-node-driver-76vfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fd2cd81427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.812 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.812 [INFO][5060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" iface="eth0" netns=""
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.812 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.812 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.836 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.836 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.836 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.844 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.844 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" HandleID="k8s-pod-network.8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-csi--node--driver--76vfx-eth0"
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.846 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.849107 containerd[1471]: 2025-01-13 21:28:46.847 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9"
Jan 13 21:28:46.850101 containerd[1471]: time="2025-01-13T21:28:46.849084682Z" level=info msg="TearDown network for sandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" successfully"
Jan 13 21:28:46.855762 containerd[1471]: time="2025-01-13T21:28:46.855693375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:46.856043 containerd[1471]: time="2025-01-13T21:28:46.855785443Z" level=info msg="RemovePodSandbox \"8ee9557dd3a86b930b7caa9c9657f1880145a64bd2c563e90880ab62f1d823d9\" returns successfully"
Jan 13 21:28:46.856675 containerd[1471]: time="2025-01-13T21:28:46.856347901Z" level=info msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\""
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.927 [WARNING][5085] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd143408-05e7-4dc4-9e36-d11bd741a281", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8", Pod:"calico-apiserver-6d4c964784-mgmhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c3e9edf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.927 [INFO][5085] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.927 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" iface="eth0" netns=""
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.927 [INFO][5085] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.927 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.954 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.955 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.955 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.961 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.961 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.963 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:46.966293 containerd[1471]: 2025-01-13 21:28:46.964 [INFO][5085] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:46.966293 containerd[1471]: time="2025-01-13T21:28:46.966234779Z" level=info msg="TearDown network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" successfully"
Jan 13 21:28:46.966293 containerd[1471]: time="2025-01-13T21:28:46.966276339Z" level=info msg="StopPodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" returns successfully"
Jan 13 21:28:46.967748 containerd[1471]: time="2025-01-13T21:28:46.966962895Z" level=info msg="RemovePodSandbox for \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\""
Jan 13 21:28:46.967748 containerd[1471]: time="2025-01-13T21:28:46.967002280Z" level=info msg="Forcibly stopping sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\""
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.027 [WARNING][5109] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0", GenerateName:"calico-apiserver-6d4c964784-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd143408-05e7-4dc4-9e36-d11bd741a281", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c964784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"eaede584f4f9566142854f59349bdb35ba99378d9e447160d80c2438122beac8", Pod:"calico-apiserver-6d4c964784-mgmhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c3e9edf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.028 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.028 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" iface="eth0" netns=""
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.028 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.028 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.056 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.056 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.056 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.065 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.065 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" HandleID="k8s-pod-network.acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--apiserver--6d4c964784--mgmhs-eth0"
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.066 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:47.069348 containerd[1471]: 2025-01-13 21:28:47.067 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02"
Jan 13 21:28:47.070514 containerd[1471]: time="2025-01-13T21:28:47.069398021Z" level=info msg="TearDown network for sandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" successfully"
Jan 13 21:28:47.074476 containerd[1471]: time="2025-01-13T21:28:47.074412394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:47.074710 containerd[1471]: time="2025-01-13T21:28:47.074519576Z" level=info msg="RemovePodSandbox \"acfedbc87f541341c9cb0abba3dacaa4e6e8b58d43df7ee0ab4b063d4ec26e02\" returns successfully"
Jan 13 21:28:47.075201 containerd[1471]: time="2025-01-13T21:28:47.075168308Z" level=info msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\""
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.121 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0", GenerateName:"calico-kube-controllers-7b795dcbb4-", Namespace:"calico-system", SelfLink:"", UID:"b0798d6b-8ec9-490c-a862-c6be718179f8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b795dcbb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c", Pod:"calico-kube-controllers-7b795dcbb4-fq58p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bfd9e7ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.122 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.122 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" iface="eth0" netns=""
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.122 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.122 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.150 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.150 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.150 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.159 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.159 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.160 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:47.163475 containerd[1471]: 2025-01-13 21:28:47.162 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.163475 containerd[1471]: time="2025-01-13T21:28:47.163437661Z" level=info msg="TearDown network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" successfully"
Jan 13 21:28:47.163475 containerd[1471]: time="2025-01-13T21:28:47.163472411Z" level=info msg="StopPodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" returns successfully"
Jan 13 21:28:47.164394 containerd[1471]: time="2025-01-13T21:28:47.164248656Z" level=info msg="RemovePodSandbox for \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\""
Jan 13 21:28:47.164394 containerd[1471]: time="2025-01-13T21:28:47.164300044Z" level=info msg="Forcibly stopping sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\""
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.211 [WARNING][5157] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0", GenerateName:"calico-kube-controllers-7b795dcbb4-", Namespace:"calico-system", SelfLink:"", UID:"b0798d6b-8ec9-490c-a862-c6be718179f8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b795dcbb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"c1990527278d5d5b4b95d2c456d18728f5c7537e34a0a1b8b0810a7d058ac52c", Pod:"calico-kube-controllers-7b795dcbb4-fq58p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bfd9e7ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.211 [INFO][5157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.211 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" iface="eth0" netns=""
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.211 [INFO][5157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.211 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.238 [INFO][5163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.238 [INFO][5163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.238 [INFO][5163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.247 [WARNING][5163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.247 [INFO][5163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" HandleID="k8s-pod-network.ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-calico--kube--controllers--7b795dcbb4--fq58p-eth0"
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.249 [INFO][5163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:47.251774 containerd[1471]: 2025-01-13 21:28:47.250 [INFO][5157] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101"
Jan 13 21:28:47.252634 containerd[1471]: time="2025-01-13T21:28:47.251821082Z" level=info msg="TearDown network for sandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" successfully"
Jan 13 21:28:47.260544 containerd[1471]: time="2025-01-13T21:28:47.260481748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:47.261167 containerd[1471]: time="2025-01-13T21:28:47.260603158Z" level=info msg="RemovePodSandbox \"ac7c3d2a83565f9ff8a1afcc80e04f61a123ffc326046dd5bd701ff3c1764101\" returns successfully"
Jan 13 21:28:47.261366 containerd[1471]: time="2025-01-13T21:28:47.261267902Z" level=info msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\""
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.308 [WARNING][5181] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b1bd5e1f-f801-477d-8fd5-44146cbed3de", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c", Pod:"coredns-6f6b679f8f-d4lrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75c644dc4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.309 [INFO][5181] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.309 [INFO][5181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" iface="eth0" netns=""
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.309 [INFO][5181] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.309 [INFO][5181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.333 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.333 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.333 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.341 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.341 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.342 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:47.345347 containerd[1471]: 2025-01-13 21:28:47.343 [INFO][5181] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.345347 containerd[1471]: time="2025-01-13T21:28:47.345315441Z" level=info msg="TearDown network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" successfully"
Jan 13 21:28:47.345347 containerd[1471]: time="2025-01-13T21:28:47.345351861Z" level=info msg="StopPodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" returns successfully"
Jan 13 21:28:47.347502 containerd[1471]: time="2025-01-13T21:28:47.346040184Z" level=info msg="RemovePodSandbox for \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\""
Jan 13 21:28:47.347502 containerd[1471]: time="2025-01-13T21:28:47.346163453Z" level=info msg="Forcibly stopping sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\""
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.398 [WARNING][5205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b1bd5e1f-f801-477d-8fd5-44146cbed3de", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-5c0b66c99d78aee0761c.c.flatcar-212911.internal", ContainerID:"4f68c77eb22f8a7dacb9920c4ec7415d04503ec91e570b4f824b0269d516cc4c", Pod:"coredns-6f6b679f8f-d4lrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75c644dc4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.398 [INFO][5205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.398 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" iface="eth0" netns=""
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.398 [INFO][5205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.398 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.423 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.423 [INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.423 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.433 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.433 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" HandleID="k8s-pod-network.9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32" Workload="ci--4081--3--0--5c0b66c99d78aee0761c.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--d4lrn-eth0"
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.435 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:47.437727 containerd[1471]: 2025-01-13 21:28:47.436 [INFO][5205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32"
Jan 13 21:28:47.438691 containerd[1471]: time="2025-01-13T21:28:47.437788065Z" level=info msg="TearDown network for sandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" successfully"
Jan 13 21:28:47.442869 containerd[1471]: time="2025-01-13T21:28:47.442819707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:47.443208 containerd[1471]: time="2025-01-13T21:28:47.442911644Z" level=info msg="RemovePodSandbox \"9d72216119003736f2813a87fd963eaa09d4b8f13e11fbfa573b54a01833ab32\" returns successfully"
Jan 13 21:28:48.108880 systemd[1]: Started sshd@11-10.128.0.96:22-147.75.109.163:33340.service - OpenSSH per-connection server daemon (147.75.109.163:33340).
Jan 13 21:28:48.409919 sshd[5218]: Accepted publickey for core from 147.75.109.163 port 33340 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:48.411752 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:48.418121 systemd-logind[1454]: New session 12 of user core.
Jan 13 21:28:48.426251 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:28:48.700496 sshd[5218]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:48.706856 systemd[1]: sshd@11-10.128.0.96:22-147.75.109.163:33340.service: Deactivated successfully.
Jan 13 21:28:48.709966 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:28:48.712253 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:28:48.713797 systemd-logind[1454]: Removed session 12.
Jan 13 21:28:48.759449 systemd[1]: Started sshd@12-10.128.0.96:22-147.75.109.163:33350.service - OpenSSH per-connection server daemon (147.75.109.163:33350).
Jan 13 21:28:49.053027 sshd[5232]: Accepted publickey for core from 147.75.109.163 port 33350 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:49.055547 sshd[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:49.062003 systemd-logind[1454]: New session 13 of user core.
Jan 13 21:28:49.067281 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:28:49.390698 sshd[5232]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:49.397101 systemd[1]: sshd@12-10.128.0.96:22-147.75.109.163:33350.service: Deactivated successfully.
Jan 13 21:28:49.399560 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:28:49.400538 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:28:49.401906 systemd-logind[1454]: Removed session 13.
Jan 13 21:28:49.447763 systemd[1]: Started sshd@13-10.128.0.96:22-147.75.109.163:33358.service - OpenSSH per-connection server daemon (147.75.109.163:33358).
Jan 13 21:28:49.734303 sshd[5242]: Accepted publickey for core from 147.75.109.163 port 33358 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:49.736242 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:49.742661 systemd-logind[1454]: New session 14 of user core.
Jan 13 21:28:49.747238 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:28:50.028162 sshd[5242]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:50.033128 systemd[1]: sshd@13-10.128.0.96:22-147.75.109.163:33358.service: Deactivated successfully.
Jan 13 21:28:50.035798 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:28:50.038197 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:28:50.039909 systemd-logind[1454]: Removed session 14.
Jan 13 21:28:52.529200 kubelet[2579]: I0113 21:28:52.529068 2579 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:28:55.085794 systemd[1]: Started sshd@14-10.128.0.96:22-147.75.109.163:33360.service - OpenSSH per-connection server daemon (147.75.109.163:33360).
Jan 13 21:28:55.380964 sshd[5262]: Accepted publickey for core from 147.75.109.163 port 33360 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:55.382898 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:55.389500 systemd-logind[1454]: New session 15 of user core.
Jan 13 21:28:55.394270 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:28:55.667029 sshd[5262]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:55.672953 systemd[1]: sshd@14-10.128.0.96:22-147.75.109.163:33360.service: Deactivated successfully.
Jan 13 21:28:55.676330 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:28:55.678697 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:28:55.680665 systemd-logind[1454]: Removed session 15.
Jan 13 21:29:00.723507 systemd[1]: Started sshd@15-10.128.0.96:22-147.75.109.163:44576.service - OpenSSH per-connection server daemon (147.75.109.163:44576).
Jan 13 21:29:01.027285 sshd[5322]: Accepted publickey for core from 147.75.109.163 port 44576 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:01.029386 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:01.036243 systemd-logind[1454]: New session 16 of user core.
Jan 13 21:29:01.042307 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:29:01.332229 sshd[5322]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:01.337813 systemd[1]: sshd@15-10.128.0.96:22-147.75.109.163:44576.service: Deactivated successfully.
Jan 13 21:29:01.340607 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:29:01.342840 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:29:01.344480 systemd-logind[1454]: Removed session 16.
Jan 13 21:29:06.387818 systemd[1]: Started sshd@16-10.128.0.96:22-147.75.109.163:44592.service - OpenSSH per-connection server daemon (147.75.109.163:44592).
Jan 13 21:29:06.678505 sshd[5337]: Accepted publickey for core from 147.75.109.163 port 44592 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:06.681146 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:06.691546 systemd-logind[1454]: New session 17 of user core.
Jan 13 21:29:06.702586 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:29:07.036510 sshd[5337]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:07.044823 systemd[1]: sshd@16-10.128.0.96:22-147.75.109.163:44592.service: Deactivated successfully.
Jan 13 21:29:07.045400 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:29:07.050869 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:29:07.056879 systemd-logind[1454]: Removed session 17.
Jan 13 21:29:12.092755 systemd[1]: Started sshd@17-10.128.0.96:22-147.75.109.163:45506.service - OpenSSH per-connection server daemon (147.75.109.163:45506).
Jan 13 21:29:12.389974 sshd[5350]: Accepted publickey for core from 147.75.109.163 port 45506 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:12.391671 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:12.398107 systemd-logind[1454]: New session 18 of user core.
Jan 13 21:29:12.409304 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:29:12.700670 sshd[5350]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:12.705166 systemd[1]: sshd@17-10.128.0.96:22-147.75.109.163:45506.service: Deactivated successfully.
Jan 13 21:29:12.707883 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:29:12.710025 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:29:12.711761 systemd-logind[1454]: Removed session 18.
Jan 13 21:29:12.761457 systemd[1]: Started sshd@18-10.128.0.96:22-147.75.109.163:45510.service - OpenSSH per-connection server daemon (147.75.109.163:45510).
Jan 13 21:29:13.061310 sshd[5363]: Accepted publickey for core from 147.75.109.163 port 45510 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:13.063792 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:13.071803 systemd-logind[1454]: New session 19 of user core.
Jan 13 21:29:13.079729 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:29:13.482125 sshd[5363]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:13.487705 systemd[1]: sshd@18-10.128.0.96:22-147.75.109.163:45510.service: Deactivated successfully.
Jan 13 21:29:13.490653 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:29:13.491834 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:29:13.493812 systemd-logind[1454]: Removed session 19.
Jan 13 21:29:13.540509 systemd[1]: Started sshd@19-10.128.0.96:22-147.75.109.163:45524.service - OpenSSH per-connection server daemon (147.75.109.163:45524).
Jan 13 21:29:13.824864 sshd[5374]: Accepted publickey for core from 147.75.109.163 port 45524 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:13.826805 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:13.833949 systemd-logind[1454]: New session 20 of user core.
Jan 13 21:29:13.839298 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:29:16.316924 sshd[5374]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:16.328757 systemd[1]: sshd@19-10.128.0.96:22-147.75.109.163:45524.service: Deactivated successfully.
Jan 13 21:29:16.335253 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:29:16.339312 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:29:16.341568 systemd-logind[1454]: Removed session 20.
Jan 13 21:29:16.371555 systemd[1]: Started sshd@20-10.128.0.96:22-147.75.109.163:45538.service - OpenSSH per-connection server daemon (147.75.109.163:45538).
Jan 13 21:29:16.661735 sshd[5392]: Accepted publickey for core from 147.75.109.163 port 45538 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:16.663739 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:16.670500 systemd-logind[1454]: New session 21 of user core.
Jan 13 21:29:16.676262 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:29:17.131535 sshd[5392]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:17.136345 systemd[1]: sshd@20-10.128.0.96:22-147.75.109.163:45538.service: Deactivated successfully.
Jan 13 21:29:17.138911 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:29:17.141607 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:29:17.143239 systemd-logind[1454]: Removed session 21.
Jan 13 21:29:17.187459 systemd[1]: Started sshd@21-10.128.0.96:22-147.75.109.163:45554.service - OpenSSH per-connection server daemon (147.75.109.163:45554).
Jan 13 21:29:17.472552 sshd[5403]: Accepted publickey for core from 147.75.109.163 port 45554 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:17.474468 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:17.480000 systemd-logind[1454]: New session 22 of user core.
Jan 13 21:29:17.487285 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:29:17.761239 sshd[5403]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:17.768310 systemd[1]: sshd@21-10.128.0.96:22-147.75.109.163:45554.service: Deactivated successfully.
Jan 13 21:29:17.771079 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:29:17.772445 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:29:17.774294 systemd-logind[1454]: Removed session 22.
Jan 13 21:29:22.813712 systemd[1]: Started sshd@22-10.128.0.96:22-147.75.109.163:33208.service - OpenSSH per-connection server daemon (147.75.109.163:33208).
Jan 13 21:29:23.104259 sshd[5418]: Accepted publickey for core from 147.75.109.163 port 33208 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:23.106226 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:23.112635 systemd-logind[1454]: New session 23 of user core.
Jan 13 21:29:23.117258 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:29:23.413854 sshd[5418]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:23.419039 systemd[1]: sshd@22-10.128.0.96:22-147.75.109.163:33208.service: Deactivated successfully.
Jan 13 21:29:23.422157 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:29:23.424533 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:29:23.426412 systemd-logind[1454]: Removed session 23.
Jan 13 21:29:24.381939 systemd[1]: run-containerd-runc-k8s.io-8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875-runc.U4GJCk.mount: Deactivated successfully.
Jan 13 21:29:28.471429 systemd[1]: Started sshd@23-10.128.0.96:22-147.75.109.163:41412.service - OpenSSH per-connection server daemon (147.75.109.163:41412).
Jan 13 21:29:28.758468 sshd[5475]: Accepted publickey for core from 147.75.109.163 port 41412 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:28.760494 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:28.767161 systemd-logind[1454]: New session 24 of user core.
Jan 13 21:29:28.772263 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:29:29.050397 sshd[5475]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:29.056327 systemd[1]: sshd@23-10.128.0.96:22-147.75.109.163:41412.service: Deactivated successfully.
Jan 13 21:29:29.058805 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:29:29.059847 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:29:29.061422 systemd-logind[1454]: Removed session 24.
Jan 13 21:29:29.952376 systemd[1]: run-containerd-runc-k8s.io-8a44efc75533700402bfc8fb3e5726296e017dac99b8d807c8c29f034126b875-runc.A0iBVJ.mount: Deactivated successfully.
Jan 13 21:29:34.110497 systemd[1]: Started sshd@24-10.128.0.96:22-147.75.109.163:41424.service - OpenSSH per-connection server daemon (147.75.109.163:41424).
Jan 13 21:29:34.403320 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 41424 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:29:34.404919 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:34.411139 systemd-logind[1454]: New session 25 of user core.
Jan 13 21:29:34.420312 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:29:34.740758 sshd[5509]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:34.745305 systemd[1]: sshd@24-10.128.0.96:22-147.75.109.163:41424.service: Deactivated successfully.
Jan 13 21:29:34.748372 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:29:34.750589 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:29:34.752138 systemd-logind[1454]: Removed session 25.