Jan 30 13:52:36.092918 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:52:36.092962 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:36.092981 kernel: BIOS-provided physical RAM map:
Jan 30 13:52:36.092995 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 30 13:52:36.093008 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 30 13:52:36.093022 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 30 13:52:36.093039 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 30 13:52:36.093057 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 30 13:52:36.093072 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 30 13:52:36.093086 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 30 13:52:36.093099 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 30 13:52:36.093111 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 30 13:52:36.093125 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 30 13:52:36.093138 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 30 13:52:36.093159 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 30 13:52:36.093174 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 30 13:52:36.093189 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 30 13:52:36.093205 kernel: NX (Execute Disable) protection: active
Jan 30 13:52:36.093221 kernel: APIC: Static calls initialized
Jan 30 13:52:36.093237 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:52:36.093253 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 30 13:52:36.093269 kernel: SMBIOS 2.4 present.
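An aside on reading the e820 map above: summing the "usable" ranges reproduces, to within the kernel's own early reservations, the memory totals printed later in this log. A minimal sketch in Python 3, assuming a copy of the reflowed log has been saved to a hypothetical boot.log:

```python
# Minimal sketch: total the "usable" ranges from the BIOS-e820 lines above.
# "boot.log" is a hypothetical path holding this log, one entry per line.
import re

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

usable = 0
with open("boot.log") as log:
    for line in log:
        m = E820.search(line.rstrip())
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            usable += end - start + 1  # e820 ranges are inclusive

print(f"{usable} bytes ~ {usable // 1024} KiB usable")
# ~7860648 KiB for the map above, close to the "7513384K/7860584K available"
# figure the kernel reports once its own early reservations are subtracted.
```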
Jan 30 13:52:36.093284 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 30 13:52:36.093300 kernel: Hypervisor detected: KVM
Jan 30 13:52:36.093320 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:52:36.093341 kernel: kvm-clock: using sched offset of 12241433017 cycles
Jan 30 13:52:36.093357 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:52:36.093373 kernel: tsc: Detected 2299.998 MHz processor
Jan 30 13:52:36.093389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:52:36.093405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:52:36.093420 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 30 13:52:36.093437 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 30 13:52:36.093454 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:52:36.093475 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 30 13:52:36.093491 kernel: Using GB pages for direct mapping
Jan 30 13:52:36.093507 kernel: Secure boot disabled
Jan 30 13:52:36.093524 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:52:36.093540 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 30 13:52:36.093556 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 30 13:52:36.093572 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 30 13:52:36.093595 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 30 13:52:36.093616 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 30 13:52:36.093632 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 30 13:52:36.093651 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 30 13:52:36.093667 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 30 13:52:36.093685 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 30 13:52:36.093702 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 30 13:52:36.093724 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 30 13:52:36.093741 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 30 13:52:36.093759 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 30 13:52:36.093775 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 30 13:52:36.093792 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 30 13:52:36.093808 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 30 13:52:36.093841 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 30 13:52:36.093857 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 30 13:52:36.093874 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 30 13:52:36.093896 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 30 13:52:36.093913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:52:36.093931 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:52:36.093947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:52:36.093965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 30 13:52:36.093982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 30 13:52:36.093999 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 30 13:52:36.094017 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 30 13:52:36.094034 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 30 13:52:36.094055 kernel: Zone ranges:
Jan 30 13:52:36.094073 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:52:36.094090 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:52:36.094107 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 13:52:36.094125 kernel: Movable zone start for each node
Jan 30 13:52:36.094141 kernel: Early memory node ranges
Jan 30 13:52:36.094159 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 30 13:52:36.094176 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 30 13:52:36.094193 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 30 13:52:36.094210 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 30 13:52:36.094231 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 13:52:36.094248 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 30 13:52:36.094265 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:52:36.094283 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 30 13:52:36.094300 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 30 13:52:36.094317 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 13:52:36.094342 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 30 13:52:36.094359 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:52:36.094377 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:52:36.094398 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:52:36.094415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:52:36.094432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:52:36.094449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:52:36.094467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:52:36.094485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:52:36.094503 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:52:36.094520 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:52:36.094537 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:52:36.094559 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:52:36.094576 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:52:36.094594 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:52:36.094608 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:52:36.094623 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:52:36.094640 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:52:36.094658 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:52:36.094678 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:36.094702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:52:36.094720 kernel: random: crng init done
Jan 30 13:52:36.094738 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 13:52:36.094757 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:52:36.094775 kernel: Fallback order for Node 0: 0
Jan 30 13:52:36.094793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 30 13:52:36.094812 kernel: Policy zone: Normal
Jan 30 13:52:36.094847 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:52:36.094865 kernel: software IO TLB: area num 2.
Jan 30 13:52:36.094887 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved)
Jan 30 13:52:36.094905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:52:36.094923 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:52:36.094940 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:52:36.094957 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:52:36.094973 kernel: Dynamic Preempt: voluntary
Jan 30 13:52:36.094991 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:52:36.095010 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:52:36.095045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:52:36.095064 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:52:36.095082 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:52:36.095103 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:52:36.095121 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:52:36.095138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:52:36.095157 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:52:36.095175 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:52:36.095193 kernel: Console: colour dummy device 80x25
Jan 30 13:52:36.095215 kernel: printk: console [ttyS0] enabled
Jan 30 13:52:36.095234 kernel: ACPI: Core revision 20230628
Jan 30 13:52:36.095252 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:52:36.095270 kernel: x2apic enabled
Jan 30 13:52:36.095288 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:52:36.095307 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 30 13:52:36.095326 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 13:52:36.095353 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
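A note on the "Kernel command line" and "Unknown kernel command line parameters" entries above: the kernel tokenizes its arguments on whitespace, consumes the key=value pairs it recognizes, and hands the rest (here BOOT_IMAGE=...) through to init. A rough sketch of that split; the real parser also handles quoting, which this toy version ignores:

```python
# Toy tokenization of the command line logged above. On a live system the
# same string is readable from /proc/cmdline.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce "
    "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"
)

params = {}
for token in cmdline.split():
    key, _, value = token.partition("=")  # only the first '=' separates key/value
    params[key] = value                   # a repeated key keeps its last value here

print(params["root"])            # LABEL=ROOT
print(params["verity.usrhash"])  # dm-verity root hash for the /usr partition
```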
Jan 30 13:52:36.095376 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 30 13:52:36.095395 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 30 13:52:36.095414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:52:36.095432 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 13:52:36.095451 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 13:52:36.095470 kernel: Spectre V2 : Mitigation: IBRS
Jan 30 13:52:36.095489 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:52:36.095508 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:52:36.095526 kernel: RETBleed: Mitigation: IBRS
Jan 30 13:52:36.095550 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:52:36.095569 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 30 13:52:36.095588 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:52:36.095608 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 13:52:36.095627 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:52:36.095646 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:52:36.095665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:52:36.095683 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:52:36.095700 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:52:36.095723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 13:52:36.095743 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:52:36.095762 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:52:36.095781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:52:36.095799 kernel: landlock: Up and running.
Jan 30 13:52:36.095815 kernel: SELinux: Initializing.
Jan 30 13:52:36.095848 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:52:36.095868 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:52:36.095887 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 30 13:52:36.095912 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:36.095931 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:36.095972 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:36.095993 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 30 13:52:36.096012 kernel: signal: max sigframe size: 1776
Jan 30 13:52:36.096031 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:52:36.096051 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:52:36.096069 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:52:36.096087 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:52:36.096111 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:52:36.096131 kernel: .... node #0, CPUs: #1
Jan 30 13:52:36.096151 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:52:36.096172 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:52:36.096192 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:52:36.096211 kernel: smpboot: Max logical packages: 1
Jan 30 13:52:36.096231 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 30 13:52:36.096250 kernel: devtmpfs: initialized
Jan 30 13:52:36.096273 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:52:36.096292 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 30 13:52:36.096312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:52:36.096337 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:52:36.096356 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:52:36.096375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:52:36.096394 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:52:36.096414 kernel: audit: type=2000 audit(1738245154.666:1): state=initialized audit_enabled=0 res=1
Jan 30 13:52:36.096432 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:52:36.096456 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:52:36.096475 kernel: cpuidle: using governor menu
Jan 30 13:52:36.096494 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:52:36.096512 kernel: dca service started, version 1.12.1
Jan 30 13:52:36.096531 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:52:36.096550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
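A worked check of the BogoMIPS figures: the per-CPU line earlier ("4599.99 BogoMIPS (lpj=2299998)") and the 9199.99 total above follow from BogoMIPS = lpj * HZ / 500000, with the SMP summary simply totaling the per-CPU loops-per-jiffy. HZ=1000 is an assumption about this kernel's configuration:

```python
# Reproduce the per-CPU and total BogoMIPS values printed in the log.
lpj = 2299998  # loops-per-jiffy preset from the log
HZ = 1000      # assumed CONFIG_HZ for this build

def bogomips(loops_per_jiffy):
    value = loops_per_jiffy * HZ / 500000
    return int(value * 100) / 100  # the kernel truncates the fraction

print(bogomips(lpj))      # 4599.99, the per-CPU line
print(bogomips(2 * lpj))  # 9199.99, "Total of 2 processors activated"
```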
Jan 30 13:52:36.096568 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:52:36.096587 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:52:36.096606 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:52:36.096628 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:52:36.096647 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:52:36.096666 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:52:36.096685 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:52:36.096703 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:52:36.096719 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:52:36.096737 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:52:36.096757 kernel: ACPI: Interpreter enabled
Jan 30 13:52:36.096776 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:52:36.096799 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:52:36.096844 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:52:36.096862 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 13:52:36.096878 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:52:36.096892 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:52:36.097146 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:52:36.097377 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:52:36.097564 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:52:36.097594 kernel: PCI host bridge to bus 0000:00
Jan 30 13:52:36.097776 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:52:36.097964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:52:36.098132 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:52:36.098299 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 30 13:52:36.098474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:52:36.098686 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:52:36.098929 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 30 13:52:36.099130 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:52:36.099311 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:52:36.099522 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 30 13:52:36.099711 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 30 13:52:36.099912 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 30 13:52:36.100106 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:52:36.100288 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 30 13:52:36.100485 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 30 13:52:36.100679 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:52:36.100896 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 30 13:52:36.101086 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 30 13:52:36.101118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:52:36.101137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:52:36.101157 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:52:36.101177 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:52:36.101197 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:52:36.101217 kernel: iommu: Default domain type: Translated
Jan 30 13:52:36.101237 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:52:36.101256 kernel: efivars: Registered efivars operations
Jan 30 13:52:36.101277 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:52:36.101297 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:52:36.101320 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 30 13:52:36.101348 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 30 13:52:36.101365 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 30 13:52:36.101385 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 30 13:52:36.101404 kernel: vgaarb: loaded
Jan 30 13:52:36.101424 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:52:36.101444 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:52:36.101463 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:52:36.101487 kernel: pnp: PnP ACPI init
Jan 30 13:52:36.101507 kernel: pnp: PnP ACPI: found 7 devices
Jan 30 13:52:36.101528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:52:36.101548 kernel: NET: Registered PF_INET protocol family
Jan 30 13:52:36.101567 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:52:36.101587 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:52:36.101607 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:52:36.101626 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:52:36.101646 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:52:36.101670 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:52:36.101690 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:52:36.101710 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:52:36.101730 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:52:36.101750 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:52:36.101985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:52:36.102157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:52:36.102326 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:52:36.102509 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 30 13:52:36.102701 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:52:36.102727 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:52:36.102748 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:52:36.102767 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 30 13:52:36.102787 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:52:36.102807 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 13:52:36.102840 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:52:36.102863 kernel: Initialise system trusted keyrings
Jan 30 13:52:36.102890 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:52:36.102905 kernel: Key type asymmetric registered
Jan 30 13:52:36.102920 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:52:36.102935 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:52:36.102952 kernel: io scheduler mq-deadline registered
Jan 30 13:52:36.102968 kernel: io scheduler kyber registered
Jan 30 13:52:36.102984 kernel: io scheduler bfq registered
Jan 30 13:52:36.103003 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:52:36.103029 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:52:36.103233 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 30 13:52:36.103258 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 30 13:52:36.103458 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 30 13:52:36.103483 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:52:36.103668 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 30 13:52:36.103692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:52:36.103711 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:52:36.103731 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:52:36.103755 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 30 13:52:36.103774 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 30 13:52:36.104035 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 30 13:52:36.104063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:52:36.104082 kernel: i8042: Warning: Keylock active
Jan 30 13:52:36.104102 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:52:36.104121 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:52:36.104306 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:52:36.104496 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:52:36.104668 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:52:35 UTC (1738245155)
Jan 30 13:52:36.104875 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:52:36.104899 kernel: intel_pstate: CPU model not supported
Jan 30 13:52:36.104919 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:52:36.104934 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:52:36.104954 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:52:36.104977 kernel: Segment Routing with IPv6
Jan 30 13:52:36.105008 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:52:36.105030 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:52:36.105047 kernel: Key type dns_resolver registered
Jan 30 13:52:36.105066 kernel: IPI shorthand broadcast: enabled
Jan 30 13:52:36.105083 kernel: sched_clock: Marking stable (867004844, 161774356)->(1054929593, -26150393)
Jan 30 13:52:36.105103 kernel: registered taskstats version 1
Jan 30 13:52:36.105122 kernel: Loading compiled-in X.509 certificates
Jan 30 13:52:36.105141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:52:36.105160 kernel: Key type .fscrypt registered
Jan 30 13:52:36.105183 kernel: Key type fscrypt-provisioning registered
Jan 30 13:52:36.105202 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:52:36.105221 kernel: ima: No architecture policies found
Jan 30 13:52:36.105240 kernel: clk: Disabling unused clocks
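A quick consistency check on the rtc_cmos entry above: the epoch value in parentheses and the ISO timestamp name the same instant, and the earlier audit record, audit(1738245154.666:1), sits roughly half a second before it:

```python
# Convert the epoch from "setting system clock to ... (1738245155)" back to UTC.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1738245155, tz=timezone.utc).isoformat())
# -> 2025-01-30T13:52:35+00:00, matching the ISO timestamp in the log line
```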
Jan 30 13:52:36.105259 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:52:36.105278 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:52:36.105296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:52:36.105315 kernel: Run /init as init process
Jan 30 13:52:36.105343 kernel: with arguments:
Jan 30 13:52:36.105366 kernel: /init
Jan 30 13:52:36.105385 kernel: with environment:
Jan 30 13:52:36.105403 kernel: HOME=/
Jan 30 13:52:36.105421 kernel: TERM=linux
Jan 30 13:52:36.105440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:52:36.105459 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:52:36.105483 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:52:36.105510 systemd[1]: Detected virtualization google.
Jan 30 13:52:36.105531 systemd[1]: Detected architecture x86-64.
Jan 30 13:52:36.105550 systemd[1]: Running in initrd.
Jan 30 13:52:36.105569 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:52:36.105589 systemd[1]: Hostname set to <localhost>.
Jan 30 13:52:36.105610 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:52:36.105629 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:52:36.105650 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:52:36.105673 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:52:36.105695 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:52:36.105715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:52:36.105735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:52:36.105755 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:52:36.105778 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:52:36.105799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:52:36.105848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:52:36.105870 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:52:36.105911 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:52:36.105935 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:52:36.105956 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:52:36.105976 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:52:36.106001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:52:36.106022 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:52:36.106044 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:52:36.106065 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:52:36.106085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:52:36.106107 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:52:36.106128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:52:36.106149 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:52:36.106169 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:52:36.106194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:52:36.106215 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:52:36.106236 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:52:36.106257 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:52:36.106278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:52:36.106299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:36.106320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:52:36.106385 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 13:52:36.106435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:52:36.106457 systemd-journald[183]: Journal started
Jan 30 13:52:36.106501 systemd-journald[183]: Runtime Journal (/run/log/journal/cb477e0b78f64137986a3544cbc71fd2) is 8.0M, max 148.7M, 140.7M free.
Jan 30 13:52:36.110839 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:52:36.114171 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:52:36.117028 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 13:52:36.127978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:52:36.138533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:52:36.146261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:36.153159 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:52:36.168022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:52:36.171087 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 13:52:36.175047 kernel: Bridge firewalling registered
Jan 30 13:52:36.171718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:36.186133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:52:36.187491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:52:36.188961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:52:36.203265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:52:36.211473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:52:36.221284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:52:36.221796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:52:36.234132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:52:36.239700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:52:36.269841 dracut-cmdline[216]: dracut-dracut-053
Jan 30 13:52:36.274356 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:36.299557 systemd-resolved[217]: Positive Trust Anchors:
Jan 30 13:52:36.300135 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:52:36.300205 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:52:36.307106 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 30 13:52:36.310462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:52:36.324446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:52:36.377874 kernel: SCSI subsystem initialized
Jan 30 13:52:36.388876 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:52:36.400862 kernel: iscsi: registered transport (tcp)
Jan 30 13:52:36.423860 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:52:36.423943 kernel: QLogic iSCSI HBA Driver
Jan 30 13:52:36.475767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:52:36.487078 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:52:36.528865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:52:36.528960 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:52:36.528990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:52:36.575879 kernel: raid6: avx2x4 gen() 17960 MB/s
Jan 30 13:52:36.592861 kernel: raid6: avx2x2 gen() 18013 MB/s
Jan 30 13:52:36.610300 kernel: raid6: avx2x1 gen() 13889 MB/s
Jan 30 13:52:36.610349 kernel: raid6: using algorithm avx2x2 gen() 18013 MB/s
Jan 30 13:52:36.628228 kernel: raid6: .... xor() 17520 MB/s, rmw enabled
Jan 30 13:52:36.628295 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:52:36.651865 kernel: xor: automatically using best checksumming function avx
Jan 30 13:52:36.831861 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:52:36.845217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:52:36.854046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:52:36.887926 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jan 30 13:52:36.895146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
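The raid6 entries above are a boot-time micro-benchmark: each gen() implementation is timed and the fastest is selected. A toy rendering of that selection, with the throughputs copied from the log:

```python
# Pick the fastest raid6 gen() implementation, as the kernel does at boot.
gen_results = {"avx2x4": 17960, "avx2x2": 18013, "avx2x1": 13889}  # MB/s from the log

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```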
Jan 30 13:52:36.903061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:52:36.934863 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Jan 30 13:52:36.972506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:52:36.982069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:52:37.075373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:52:37.085099 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:52:37.123650 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:52:37.127603 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:52:37.137937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:52:37.141922 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:52:37.155916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:52:37.188870 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:52:37.192212 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:52:37.281119 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:52:37.281192 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:52:37.292591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:52:37.335811 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:52:37.336189 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 30 13:52:37.292818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:52:37.311119 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:37.313390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:52:37.313653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:37.326373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:37.338324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:37.387860 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 30 13:52:37.402961 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 30 13:52:37.403256 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 30 13:52:37.403495 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 30 13:52:37.403738 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:52:37.404020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:52:37.404050 kernel: GPT:17805311 != 25165823
Jan 30 13:52:37.404074 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:52:37.404097 kernel: GPT:17805311 != 25165823
Jan 30 13:52:37.404121 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:52:37.404146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:52:37.404182 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 30 13:52:37.388958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:37.401593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:37.442218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
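A worked reading of the GPT warning above: the disk has 25165824 512-byte logical blocks, so its last LBA is 25165823, but the backup GPT header sits at LBA 17805311. That is consistent with an image written for a smaller disk that was later grown, which is the situation disk-uuid.service repairs a few lines further down:

```python
# Check the numbers in the "sd 0:0:1:0: [sda]" and "GPT:" lines above.
SECTOR = 512
blocks = 25165824

print(blocks * SECTOR / 1e9)    # 12.88... -> the "12.9 GB" in the log
print(blocks * SECTOR / 2**30)  # 12.0     -> the "12.0 GiB" in the log
print(blocks - 1)               # 25165823, where the backup header belongs
```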
Jan 30 13:52:37.461847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456)
Jan 30 13:52:37.469846 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Jan 30 13:52:37.481073 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 30 13:52:37.500957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 30 13:52:37.511885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 30 13:52:37.518227 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 30 13:52:37.518377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 30 13:52:37.533051 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:52:37.546434 disk-uuid[549]: Primary Header is updated.
Jan 30 13:52:37.546434 disk-uuid[549]: Secondary Entries is updated.
Jan 30 13:52:37.546434 disk-uuid[549]: Secondary Header is updated.
Jan 30 13:52:37.559846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:52:37.580871 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:52:37.603864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:52:38.594981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:52:38.595060 disk-uuid[550]: The operation has completed successfully.
Jan 30 13:52:38.666625 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:52:38.666790 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:52:38.706042 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:52:38.736146 sh[567]: Success
Jan 30 13:52:38.759942 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:52:38.845628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:52:38.870979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:52:38.875475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:52:38.927611 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:52:38.927723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:38.927750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:52:38.943975 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:52:38.944040 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:52:38.983883 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:52:38.992511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:52:38.993578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:52:38.999126 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:52:39.038063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:52:39.077683 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:39.077775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:39.077801 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:52:39.100947 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:52:39.101043 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:52:39.117505 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:52:39.134287 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:39.140919 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:52:39.157098 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:52:39.242998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:52:39.272119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:52:39.335028 ignition[686]: Ignition 2.19.0
Jan 30 13:52:39.335045 ignition[686]: Stage: fetch-offline
Jan 30 13:52:39.338535 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:52:39.335104 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:39.339017 systemd-networkd[750]: lo: Link UP
Jan 30 13:52:39.335121 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:39.339026 systemd-networkd[750]: lo: Gained carrier
Jan 30 13:52:39.335282 ignition[686]: parsed url from cmdline: ""
Jan 30 13:52:39.340644 systemd-networkd[750]: Enumeration completed
Jan 30 13:52:39.335289 ignition[686]: no config URL provided
Jan 30 13:52:39.341271 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:52:39.335298 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:52:39.341278 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:52:39.335313 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:52:39.343423 systemd-networkd[750]: eth0: Link UP
Jan 30 13:52:39.335325 ignition[686]: failed to fetch config: resource requires networking
Jan 30 13:52:39.343430 systemd-networkd[750]: eth0: Gained carrier
Jan 30 13:52:39.335608 ignition[686]: Ignition finished successfully
Jan 30 13:52:39.343442 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:52:39.449383 ignition[759]: Ignition 2.19.0
Jan 30 13:52:39.350324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:52:39.449395 ignition[759]: Stage: fetch
Jan 30 13:52:39.354909 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.23/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 30 13:52:39.449670 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:39.380263 systemd[1]: Reached target network.target - Network.
Jan 30 13:52:39.449687 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:39.401061 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:52:39.449887 ignition[759]: parsed url from cmdline: ""
Jan 30 13:52:39.462178 unknown[759]: fetched base config from "system"
Jan 30 13:52:39.449894 ignition[759]: no config URL provided
Jan 30 13:52:39.462200 unknown[759]: fetched base config from "system"
Jan 30 13:52:39.449903 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:52:39.462234 unknown[759]: fetched user config from "gcp"
Jan 30 13:52:39.449919 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:52:39.465183 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:52:39.449950 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 30 13:52:39.488088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:52:39.454736 ignition[759]: GET result: OK
Jan 30 13:52:39.515369 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:52:39.454876 ignition[759]: parsing config with SHA512: 927a17651e082fdbcf14fecf53ae4d134bd4ef6ce807539c8e8edef218b963a28165ab9ff95a6c3e7fd859a9854f2799bc8fd5bd9b277d232678375fea7ab994
Jan 30 13:52:39.532177 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:52:39.463184 ignition[759]: fetch: fetch complete
Jan 30 13:52:39.588479 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:52:39.463200 ignition[759]: fetch: fetch passed
Jan 30 13:52:39.596580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:52:39.463275 ignition[759]: Ignition finished successfully
Jan 30 13:52:39.627139 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:52:39.512297 ignition[766]: Ignition 2.19.0
Jan 30 13:52:39.635209 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:52:39.512308 ignition[766]: Stage: kargs
Jan 30 13:52:39.653235 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:52:39.512522 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:39.680116 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:52:39.512533 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:39.703049 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:52:39.513564 ignition[766]: kargs: kargs passed
Jan 30 13:52:39.513618 ignition[766]: Ignition finished successfully
Jan 30 13:52:39.564777 ignition[771]: Ignition 2.19.0
Jan 30 13:52:39.564787 ignition[771]: Stage: disks
Jan 30 13:52:39.565091 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:39.565110 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:39.566484 ignition[771]: disks: disks passed
Jan 30 13:52:39.566538 ignition[771]: Ignition finished successfully
Jan 30 13:52:39.758855 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 13:52:39.933961 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:52:39.939103 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:52:40.096966 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:52:40.097903 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:52:40.098775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
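For reference, a minimal sketch of the fetch that the Ignition "fetch" stage logs above: read instance user-data from the GCE metadata server and fingerprint it with SHA512, as in the "parsing config with SHA512: ..." entry. Illustrative only; it works only from inside a GCE VM, and Ignition's real client adds retries and config merging. The Metadata-Flavor header is the standard requirement for that endpoint:

```python
# Fetch GCE instance user-data and print its SHA512 fingerprint.
import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"
request = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(request, timeout=5) as response:  # GCE-only
    config = response.read()

print(hashlib.sha512(config).hexdigest())
```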
Jan 30 13:52:40.129991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:52:40.158847 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 30 13:52:40.177154 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:40.177249 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:40.177276 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:52:40.183146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:52:40.213115 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:52:40.213172 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:52:40.192454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:52:40.192526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:52:40.192559 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:52:40.226117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:52:40.247212 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:52:40.278097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:52:40.420843 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:52:40.430993 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:52:40.442338 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:52:40.452016 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:52:40.592398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:52:40.597960 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:52:40.600968 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 30 13:52:40.648892 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:40.652209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:52:40.662387 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:52:40.692536 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:52:40.703942 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:52:40.726991 ignition[900]: INFO : Ignition 2.19.0
Jan 30 13:52:40.726991 ignition[900]: INFO : Stage: mount
Jan 30 13:52:40.726991 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:40.726991 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:40.726991 ignition[900]: INFO : mount: mount passed
Jan 30 13:52:40.726991 ignition[900]: INFO : Ignition finished successfully
Jan 30 13:52:40.845999 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913)
Jan 30 13:52:40.846059 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:40.846086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:40.846110 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:52:40.846134 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:52:40.846158 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:52:40.725946 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:52:40.742213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:52:40.819575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:52:40.866141 unknown[929]: wrote ssh authorized keys file for user: core
Jan 30 13:52:40.883008 ignition[929]: INFO : Ignition 2.19.0
Jan 30 13:52:40.883008 ignition[929]: INFO : Stage: files
Jan 30 13:52:40.883008 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:40.883008 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:52:40.883008 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:52:40.883008 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:52:40.883008 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:52:40.883008 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:52:43.059240 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:52:43.214855 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
"/sysroot/home/core/nginx.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:52:43.467031 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:52:43.813461 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.813461 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:43.854983 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:43.854983 ignition[929]: INFO : files: files passed Jan 30 13:52:43.854983 ignition[929]: INFO : Ignition finished successfully Jan 30 13:52:43.820944 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:52:43.849191 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 30 13:52:43.885054 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:52:43.896499 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:52:44.062986 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:44.062986 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:43.896631 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:52:44.112119 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:43.941476 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:43.965445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:52:43.995076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:52:44.084518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:52:44.084642 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:52:44.102875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:52:44.122023 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:52:44.143180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:52:44.150105 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:52:44.191492 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:44.213055 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:52:44.251194 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:44.273245 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:44.295224 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:52:44.314223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:52:44.314421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:44.347251 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:52:44.367179 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:52:44.383214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:52:44.401167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:52:44.423240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:52:44.444179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:52:44.462237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:52:44.483180 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:52:44.503218 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:52:44.523209 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:52:44.541108 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:52:44.541332 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:52:44.572210 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:52:44.592202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:52:44.610120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:52:44.610325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:44.629282 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:52:44.629516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:52:44.659247 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:52:44.739002 ignition[982]: INFO : Ignition 2.19.0 Jan 30 13:52:44.739002 ignition[982]: INFO : Stage: umount Jan 30 13:52:44.739002 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:44.739002 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:44.739002 ignition[982]: INFO : umount: umount passed Jan 30 13:52:44.739002 ignition[982]: INFO : Ignition finished successfully Jan 30 13:52:44.659488 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:44.681296 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:52:44.681490 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:52:44.706098 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:52:44.748978 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:52:44.749253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:44.774288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:52:44.802154 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:52:44.802378 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:44.828277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:52:44.828468 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:52:44.863017 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:52:44.864144 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:52:44.864267 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:52:44.879640 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:52:44.879764 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:52:44.902200 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:52:44.902328 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:52:44.924142 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:52:44.924210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:52:44.944103 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:52:44.944193 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:52:44.962115 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:52:44.962204 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:52:44.982072 systemd[1]: Stopped target network.target - Network. Jan 30 13:52:44.998994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:52:44.999120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:52:45.009286 systemd[1]: Stopped target paths.target - Path Units. 
Jan 30 13:52:45.034996 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:52:45.039919 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:45.043171 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:52:45.061219 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:52:45.076248 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:52:45.076308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:52:45.091231 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:52:45.091295 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:52:45.106214 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:52:45.106286 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:52:45.123259 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:52:45.123329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:52:45.141264 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:52:45.141347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:52:45.175468 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:52:45.179924 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 30 13:52:45.183316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:52:45.210535 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:52:45.210677 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:52:45.229637 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:52:45.230148 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:52:45.247509 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:52:45.247586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:45.260959 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:52:45.289976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:52:45.290109 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:52:45.309104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:52:45.309201 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:45.329071 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:52:45.749854 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:52:45.329167 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:45.352097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:52:45.352200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:45.371248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:45.390518 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:52:45.390715 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:52:45.416938 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:52:45.417040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:52:45.434107 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:52:45.434185 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:45.463074 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:52:45.463184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:52:45.491201 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:52:45.491415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:52:45.521257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:52:45.521350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:52:45.556087 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:52:45.574967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:52:45.575180 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:45.583228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:52:45.583293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:45.613660 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:52:45.613793 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:52:45.631592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:52:45.631720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:52:45.653345 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:52:45.667054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:52:45.706642 systemd[1]: Switching root. 
Jan 30 13:52:46.026012 systemd-journald[183]: Journal stopped Jan 30 13:52:36.092918 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:52:36.092962 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:52:36.092981 kernel: BIOS-provided physical RAM map: Jan 30 13:52:36.092995 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 30 13:52:36.093008 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 30 13:52:36.093022 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 30 13:52:36.093039 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 30 13:52:36.093057 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 30 13:52:36.093072 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 30 13:52:36.093086 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 30 13:52:36.093099 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 30 13:52:36.093111 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 30 13:52:36.093125 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 30 13:52:36.093138 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 30 13:52:36.093159 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 30 13:52:36.093174 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 30 13:52:36.093189 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 30 13:52:36.093205 kernel: NX (Execute Disable) protection: active Jan 30 13:52:36.093221 kernel: APIC: Static calls initialized Jan 30 13:52:36.093237 kernel: efi: EFI v2.7 by EDK II Jan 30 13:52:36.093253 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 30 13:52:36.093269 kernel: SMBIOS 2.4 present. 
Jan 30 13:52:36.093284 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 30 13:52:36.093300 kernel: Hypervisor detected: KVM Jan 30 13:52:36.093320 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:52:36.093341 kernel: kvm-clock: using sched offset of 12241433017 cycles Jan 30 13:52:36.093357 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:52:36.093373 kernel: tsc: Detected 2299.998 MHz processor Jan 30 13:52:36.093389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:52:36.093405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:52:36.093420 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 30 13:52:36.093437 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 30 13:52:36.093454 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:52:36.093475 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 30 13:52:36.093491 kernel: Using GB pages for direct mapping Jan 30 13:52:36.093507 kernel: Secure boot disabled Jan 30 13:52:36.093524 kernel: ACPI: Early table checksum verification disabled Jan 30 13:52:36.093540 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 30 13:52:36.093556 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 30 13:52:36.093572 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 30 13:52:36.093595 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 30 13:52:36.093616 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 30 13:52:36.093632 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 30 13:52:36.093651 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 30 13:52:36.093667 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 30 13:52:36.093685 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 30 13:52:36.093702 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 30 13:52:36.093724 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 30 13:52:36.093741 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 30 13:52:36.093759 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 30 13:52:36.093775 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 30 13:52:36.093792 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 30 13:52:36.093808 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 30 13:52:36.093841 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 30 13:52:36.093857 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 30 13:52:36.093874 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 30 13:52:36.093896 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 30 13:52:36.093913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:52:36.093931 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:52:36.093947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:52:36.093965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 30 13:52:36.093982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 30 13:52:36.093999 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 30 13:52:36.094017 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 30 13:52:36.094034 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 30 13:52:36.094055 kernel: Zone ranges: Jan 30 13:52:36.094073 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:52:36.094090 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:52:36.094107 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:52:36.094125 kernel: Movable zone start for each node Jan 30 13:52:36.094141 kernel: Early memory node ranges Jan 30 13:52:36.094159 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 30 13:52:36.094176 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 30 13:52:36.094193 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 30 13:52:36.094210 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 30 13:52:36.094231 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:52:36.094248 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 30 13:52:36.094265 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:52:36.094283 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 30 13:52:36.094300 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 30 13:52:36.094317 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 30 13:52:36.094342 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 30 13:52:36.094359 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 30 13:52:36.094377 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:52:36.094398 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:52:36.094415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:52:36.094432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:52:36.094449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:52:36.094467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:52:36.094485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:52:36.094503 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:52:36.094520 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:52:36.094537 kernel: Booting paravirtualized kernel on KVM Jan 30 13:52:36.094559 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:52:36.094576 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:52:36.094594 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:52:36.094608 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:52:36.094623 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:52:36.094640 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:52:36.094658 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:52:36.094678 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:52:36.094702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:52:36.094720 kernel: random: crng init done Jan 30 13:52:36.094738 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:52:36.094757 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:52:36.094775 kernel: Fallback order for Node 0: 0 Jan 30 13:52:36.094793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 30 13:52:36.094812 kernel: Policy zone: Normal Jan 30 13:52:36.094847 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:52:36.094865 kernel: software IO TLB: area num 2. Jan 30 13:52:36.094887 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Jan 30 13:52:36.094905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:52:36.094923 kernel: Kernel/User page tables isolation: enabled Jan 30 13:52:36.094940 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:52:36.094957 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:52:36.094973 kernel: Dynamic Preempt: voluntary Jan 30 13:52:36.094991 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:52:36.095010 kernel: rcu: RCU event tracing is enabled. Jan 30 13:52:36.095045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:52:36.095064 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:52:36.095082 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:52:36.095103 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:52:36.095121 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:52:36.095138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:52:36.095157 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:52:36.095175 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:52:36.095193 kernel: Console: colour dummy device 80x25 Jan 30 13:52:36.095215 kernel: printk: console [ttyS0] enabled Jan 30 13:52:36.095234 kernel: ACPI: Core revision 20230628 Jan 30 13:52:36.095252 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:52:36.095270 kernel: x2apic enabled Jan 30 13:52:36.095288 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:52:36.095307 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 30 13:52:36.095326 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:52:36.095353 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 30 13:52:36.095376 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 30 13:52:36.095395 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 30 13:52:36.095414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:52:36.095432 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:52:36.095451 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:52:36.095470 kernel: Spectre V2 : Mitigation: IBRS Jan 30 13:52:36.095489 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:52:36.095508 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:52:36.095526 kernel: RETBleed: Mitigation: IBRS Jan 30 13:52:36.095550 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:52:36.095569 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 30 13:52:36.095588 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:52:36.095608 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:52:36.095627 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:52:36.095646 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:52:36.095665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:52:36.095683 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:52:36.095700 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:52:36.095723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:52:36.095743 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:52:36.095762 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:52:36.095781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:52:36.095799 kernel: landlock: Up and running. Jan 30 13:52:36.095815 kernel: SELinux: Initializing. Jan 30 13:52:36.095848 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:52:36.095868 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:52:36.095887 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 30 13:52:36.095912 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:52:36.095931 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:52:36.095972 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:52:36.095993 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 30 13:52:36.096012 kernel: signal: max sigframe size: 1776 Jan 30 13:52:36.096031 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:52:36.096051 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:52:36.096069 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:52:36.096087 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:52:36.096111 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:52:36.096131 kernel: .... node #0, CPUs: #1 Jan 30 13:52:36.096151 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 30 13:52:36.096172 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 13:52:36.096192 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:52:36.096211 kernel: smpboot: Max logical packages: 1 Jan 30 13:52:36.096231 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 30 13:52:36.096250 kernel: devtmpfs: initialized Jan 30 13:52:36.096273 kernel: x86/mm: Memory block size: 128MB Jan 30 13:52:36.096292 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 30 13:52:36.096312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:52:36.096337 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:52:36.096356 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:52:36.096375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:52:36.096394 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:52:36.096414 kernel: audit: type=2000 audit(1738245154.666:1): state=initialized audit_enabled=0 res=1 Jan 30 13:52:36.096432 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:52:36.096456 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:52:36.096475 kernel: cpuidle: using governor menu Jan 30 13:52:36.096494 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:52:36.096512 kernel: dca service started, version 1.12.1 Jan 30 13:52:36.096531 kernel: PCI: Using configuration type 1 for base access Jan 30 13:52:36.096550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:52:36.096568 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:52:36.096587 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:52:36.096606 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:52:36.096628 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:52:36.096647 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:52:36.096666 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:52:36.096685 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:52:36.096703 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:52:36.096719 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 30 13:52:36.096737 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:52:36.096757 kernel: ACPI: Interpreter enabled Jan 30 13:52:36.096776 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:52:36.096799 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:52:36.096844 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:52:36.096862 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:52:36.096878 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 30 13:52:36.096892 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:52:36.097146 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:52:36.097377 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:52:36.097564 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:52:36.097594 kernel: PCI host bridge to bus 0000:00 Jan 30 13:52:36.097776 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:52:36.097964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:52:36.098132 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:52:36.098299 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 30 13:52:36.098474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:52:36.098686 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:52:36.098929 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 30 13:52:36.099130 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:52:36.099311 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 30 13:52:36.099522 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 30 13:52:36.099711 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 30 13:52:36.099912 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 30 13:52:36.100106 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:52:36.100288 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 30 13:52:36.100485 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 30 13:52:36.100679 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:52:36.100896 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 30 13:52:36.101086 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 30 13:52:36.101118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:52:36.101137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:52:36.101157 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:52:36.101177 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:52:36.101197 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:52:36.101217 kernel: iommu: Default domain type: Translated Jan 30 13:52:36.101237 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:52:36.101256 kernel: efivars: Registered efivars operations Jan 30 13:52:36.101277 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:52:36.101297 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:52:36.101320 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 30 13:52:36.101348 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 30 13:52:36.101365 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 30 13:52:36.101385 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 30 13:52:36.101404 kernel: vgaarb: loaded Jan 30 13:52:36.101424 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:52:36.101444 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:52:36.101463 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:52:36.101487 kernel: pnp: PnP ACPI init Jan 30 13:52:36.101507 kernel: pnp: PnP ACPI: found 7 devices Jan 30 13:52:36.101528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:52:36.101548 kernel: NET: Registered PF_INET protocol family Jan 30 13:52:36.101567 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:52:36.101587 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:52:36.101607 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:52:36.101626 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:52:36.101646 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:52:36.101670 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:52:36.101690 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:52:36.101710 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:52:36.101730 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:52:36.101750 kernel: NET: Registered PF_XDP protocol family Jan 30 13:52:36.101985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:52:36.102157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:52:36.102326 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:52:36.102509 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 30 13:52:36.102701 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:52:36.102727 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:52:36.102748 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:52:36.102767 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 30 13:52:36.102787 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:52:36.102807 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:52:36.102840 kernel: clocksource: Switched to clocksource tsc Jan 30 13:52:36.102863 kernel: Initialise system trusted keyrings Jan 30 13:52:36.102890 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:52:36.102905 kernel: Key type asymmetric registered Jan 30 13:52:36.102920 kernel: Asymmetric key parser 'x509' registered Jan 30 13:52:36.102935 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:52:36.102952 kernel: io scheduler mq-deadline registered Jan 30 13:52:36.102968 kernel: io scheduler kyber registered Jan 30 13:52:36.102984 kernel: io scheduler bfq registered Jan 30 13:52:36.103003 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:52:36.103029 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:52:36.103233 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 30 13:52:36.103258 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 30 13:52:36.103458 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 30 13:52:36.103483 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:52:36.103668 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 30 13:52:36.103692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:52:36.103711 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:52:36.103731 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:52:36.103755 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 30 13:52:36.103774 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 30 13:52:36.104035 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 30 13:52:36.104063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:52:36.104082 kernel: i8042: Warning: Keylock active Jan 30 13:52:36.104102 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:52:36.104121 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:52:36.104306 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 30 13:52:36.104496 kernel: rtc_cmos 00:00: registered as rtc0 Jan 30 13:52:36.104668 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:52:35 UTC (1738245155) Jan 30 13:52:36.104875 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 30 13:52:36.104899 kernel: intel_pstate: CPU model not supported Jan 30 13:52:36.104919 kernel: pstore: Using crash dump compression: deflate Jan 30 13:52:36.104934 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:52:36.104954 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:52:36.104977 kernel: Segment Routing with IPv6 Jan 30 13:52:36.105008 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:52:36.105030 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:52:36.105047 kernel: Key type dns_resolver registered Jan 30 13:52:36.105066 kernel: IPI shorthand broadcast: enabled Jan 30 13:52:36.105083 kernel: sched_clock: Marking stable (867004844, 161774356)->(1054929593, -26150393) Jan 30 13:52:36.105103 kernel: registered taskstats version 1 Jan 30 13:52:36.105122 kernel: Loading compiled-in X.509 certificates Jan 30 13:52:36.105141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:52:36.105160 kernel: Key type .fscrypt registered Jan 30 13:52:36.105183 kernel: Key type fscrypt-provisioning registered Jan 30 13:52:36.105202 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:52:36.105221 kernel: ima: No architecture policies found Jan 30 
13:52:36.105240 kernel: clk: Disabling unused clocks Jan 30 13:52:36.105259 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:52:36.105278 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:52:36.105296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:52:36.105315 kernel: Run /init as init process Jan 30 13:52:36.105343 kernel: with arguments: Jan 30 13:52:36.105366 kernel: /init Jan 30 13:52:36.105385 kernel: with environment: Jan 30 13:52:36.105403 kernel: HOME=/ Jan 30 13:52:36.105421 kernel: TERM=linux Jan 30 13:52:36.105440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:52:36.105459 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:52:36.105483 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:52:36.105510 systemd[1]: Detected virtualization google. Jan 30 13:52:36.105531 systemd[1]: Detected architecture x86-64. Jan 30 13:52:36.105550 systemd[1]: Running in initrd. Jan 30 13:52:36.105569 systemd[1]: No hostname configured, using default hostname. Jan 30 13:52:36.105589 systemd[1]: Hostname set to . Jan 30 13:52:36.105610 systemd[1]: Initializing machine ID from random generator. Jan 30 13:52:36.105629 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:52:36.105650 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:36.105673 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:36.105695 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:52:36.105715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:52:36.105735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:52:36.105755 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:52:36.105778 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:52:36.105799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:52:36.105848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:52:36.105870 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:52:36.105911 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:52:36.105935 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:52:36.105956 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:52:36.105976 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:52:36.106001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:52:36.106022 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:52:36.106044 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:52:36.106065 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 13:52:36.106085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:36.106107 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:52:36.106128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:36.106149 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:52:36.106169 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:52:36.106194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:52:36.106215 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:52:36.106236 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:52:36.106257 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:52:36.106278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:52:36.106299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:52:36.106320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:52:36.106385 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 13:52:36.106435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:36.106457 systemd-journald[183]: Journal started Jan 30 13:52:36.106501 systemd-journald[183]: Runtime Journal (/run/log/journal/cb477e0b78f64137986a3544cbc71fd2) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:52:36.110839 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:52:36.114171 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:52:36.117028 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 13:52:36.127978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:52:36.138533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:52:36.146261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:36.153159 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:52:36.168022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:52:36.171087 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 13:52:36.175047 kernel: Bridge firewalling registered Jan 30 13:52:36.171718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:52:36.186133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:52:36.187491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:36.188961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:36.203265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:52:36.211473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:52:36.221284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:36.221796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:36.234132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:52:36.239700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:52:36.269841 dracut-cmdline[216]: dracut-dracut-053 Jan 30 13:52:36.274356 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:52:36.299557 systemd-resolved[217]: Positive Trust Anchors: Jan 30 13:52:36.300135 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:52:36.300205 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:52:36.307106 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 30 13:52:36.310462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:52:36.324446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:36.377874 kernel: SCSI subsystem initialized Jan 30 13:52:36.388876 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:52:36.400862 kernel: iscsi: registered transport (tcp) Jan 30 13:52:36.423860 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:52:36.423943 kernel: QLogic iSCSI HBA Driver Jan 30 13:52:36.475767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:52:36.487078 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:52:36.528865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:52:36.528960 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:52:36.528990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:52:36.575879 kernel: raid6: avx2x4 gen() 17960 MB/s Jan 30 13:52:36.592861 kernel: raid6: avx2x2 gen() 18013 MB/s Jan 30 13:52:36.610300 kernel: raid6: avx2x1 gen() 13889 MB/s Jan 30 13:52:36.610349 kernel: raid6: using algorithm avx2x2 gen() 18013 MB/s Jan 30 13:52:36.628228 kernel: raid6: .... xor() 17520 MB/s, rmw enabled Jan 30 13:52:36.628295 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:52:36.651865 kernel: xor: automatically using best checksumming function avx Jan 30 13:52:36.831861 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:52:36.845217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:52:36.854046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:36.887926 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 30 13:52:36.895146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:52:36.903061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:52:36.934863 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jan 30 13:52:36.972506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:52:36.982069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:52:37.075373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:37.085099 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:52:37.123650 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:52:37.127603 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:52:37.137937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:37.141922 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:52:37.155916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:52:37.188870 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:52:37.192212 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:52:37.281119 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:52:37.281192 kernel: AES CTR mode by8 optimization enabled Jan 30 13:52:37.292591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:52:37.335811 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:52:37.336189 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 13:52:37.292818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:52:37.311119 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:52:37.313390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:52:37.313653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:37.326373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:52:37.338324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:52:37.387860 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 13:52:37.402961 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 13:52:37.403256 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 13:52:37.403495 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 13:52:37.403738 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:52:37.404020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:52:37.404050 kernel: GPT:17805311 != 25165823 Jan 30 13:52:37.404074 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:52:37.404097 kernel: GPT:17805311 != 25165823 Jan 30 13:52:37.404121 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:52:37.404146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:52:37.404182 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 13:52:37.388958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:37.401593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:52:37.442218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:52:37.461847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Jan 30 13:52:37.469846 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454) Jan 30 13:52:37.481073 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 13:52:37.500957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 13:52:37.511885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 13:52:37.518227 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 13:52:37.518377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 13:52:37.533051 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:52:37.546434 disk-uuid[549]: Primary Header is updated. Jan 30 13:52:37.546434 disk-uuid[549]: Secondary Entries is updated. Jan 30 13:52:37.546434 disk-uuid[549]: Secondary Header is updated. Jan 30 13:52:37.559846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:52:37.580871 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:52:37.603864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:52:38.594981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:52:38.595060 disk-uuid[550]: The operation has completed successfully. Jan 30 13:52:38.666625 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:52:38.666790 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:52:38.706042 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:52:38.736146 sh[567]: Success Jan 30 13:52:38.759942 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:52:38.845628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:52:38.870979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:52:38.875475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:52:38.927611 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:52:38.927723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:52:38.927750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:52:38.943975 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:52:38.944040 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:52:38.983883 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:52:38.992511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:52:38.993578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:52:38.999126 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:52:39.038063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
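verity-setup just assembled /dev/mapper/usr from the read-only usr partition plus the verity.usrhash= root hash on the kernel command line, using the sha256-avx2 implementation the kernel reports above. Conceptually, dm-verity hashes every data block, hashes those digests into parent blocks, and so on up to one root hash; a read succeeds only if the block's path to the root verifies. A toy two-level sketch of that idea (real dm-verity adds a salt, a superblock, and a deeper tree, all omitted here):

    import hashlib

    BLOCK = 4096

    def toy_root_hash(data: bytes) -> str:
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
        return hashlib.sha256(leaves).hexdigest()  # single parent level = root

    print(toy_root_hash(b"\x00" * (3 * BLOCK)))
    # Flipping any bit in the data changes the root, so a tampered usr
    # partition can never match the hash baked into the kernel command line.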
Jan 30 13:52:39.077683 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:52:39.077775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:52:39.077801 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:52:39.100947 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:52:39.101043 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:52:39.117505 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:52:39.134287 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:52:39.140919 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:52:39.157098 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:52:39.242998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:52:39.272119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:52:39.335028 ignition[686]: Ignition 2.19.0 Jan 30 13:52:39.335045 ignition[686]: Stage: fetch-offline Jan 30 13:52:39.338535 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:52:39.335104 ignition[686]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:39.339017 systemd-networkd[750]: lo: Link UP Jan 30 13:52:39.335121 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:39.339026 systemd-networkd[750]: lo: Gained carrier Jan 30 13:52:39.335282 ignition[686]: parsed url from cmdline: "" Jan 30 13:52:39.340644 systemd-networkd[750]: Enumeration completed Jan 30 13:52:39.335289 ignition[686]: no config URL provided Jan 30 13:52:39.341271 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:39.335298 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:52:39.341278 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:52:39.335313 ignition[686]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:52:39.343423 systemd-networkd[750]: eth0: Link UP Jan 30 13:52:39.335325 ignition[686]: failed to fetch config: resource requires networking Jan 30 13:52:39.343430 systemd-networkd[750]: eth0: Gained carrier Jan 30 13:52:39.335608 ignition[686]: Ignition finished successfully Jan 30 13:52:39.343442 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:39.449383 ignition[759]: Ignition 2.19.0 Jan 30 13:52:39.350324 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:52:39.449395 ignition[759]: Stage: fetch Jan 30 13:52:39.354909 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.23/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:52:39.449670 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:39.380263 systemd[1]: Reached target network.target - Network. Jan 30 13:52:39.449687 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:39.401061 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
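The "failed to fetch config: resource requires networking" result above is the expected outcome of the first Ignition pass on GCE: the stages run as a fixed pipeline, and fetch-offline simply yields to the networked fetch stage once systemd-networkd has brought eth0 up, which is exactly what the interleaved networkd lines show happening. The stage order, reconstructed from the "Stage:" lines scattered through the rest of this journal:

    # One ignition invocation per stage; outcomes as observed on this boot.
    stages = {
        "fetch-offline": "no local config; defers to the networked fetch",
        "fetch":         "GETs user-data from the metadata server, parses it",
        "kargs":         "passes with no kernel-argument changes",
        "disks":         "passes with no disk operations",
        "mount":         "mounts config filesystems under /sysroot",
        "files":         "writes files, links and units into /sysroot",
        "umount":        "unwinds the mounts before switch-root",
    }
    for name, outcome in stages.items():
        print(f"{name:14} {outcome}")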
Jan 30 13:52:39.449887 ignition[759]: parsed url from cmdline: "" Jan 30 13:52:39.462178 unknown[759]: fetched base config from "system" Jan 30 13:52:39.449894 ignition[759]: no config URL provided Jan 30 13:52:39.462200 unknown[759]: fetched base config from "system" Jan 30 13:52:39.449903 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:52:39.462234 unknown[759]: fetched user config from "gcp" Jan 30 13:52:39.449919 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:52:39.465183 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:52:39.449950 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 13:52:39.488088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:52:39.454736 ignition[759]: GET result: OK Jan 30 13:52:39.515369 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:52:39.454876 ignition[759]: parsing config with SHA512: 927a17651e082fdbcf14fecf53ae4d134bd4ef6ce807539c8e8edef218b963a28165ab9ff95a6c3e7fd859a9854f2799bc8fd5bd9b277d232678375fea7ab994 Jan 30 13:52:39.532177 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:52:39.463184 ignition[759]: fetch: fetch complete Jan 30 13:52:39.588479 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:52:39.463200 ignition[759]: fetch: fetch passed Jan 30 13:52:39.596580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:52:39.463275 ignition[759]: Ignition finished successfully Jan 30 13:52:39.627139 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:52:39.512297 ignition[766]: Ignition 2.19.0 Jan 30 13:52:39.635209 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:52:39.512308 ignition[766]: Stage: kargs Jan 30 13:52:39.653235 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:52:39.512522 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:39.680116 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:52:39.512533 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:39.703049 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:52:39.513564 ignition[766]: kargs: kargs passed Jan 30 13:52:39.513618 ignition[766]: Ignition finished successfully Jan 30 13:52:39.564777 ignition[771]: Ignition 2.19.0 Jan 30 13:52:39.564787 ignition[771]: Stage: disks Jan 30 13:52:39.565091 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:39.565110 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:39.566484 ignition[771]: disks: disks passed Jan 30 13:52:39.566538 ignition[771]: Ignition finished successfully Jan 30 13:52:39.758855 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:52:39.933961 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:52:39.939103 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:52:40.096966 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:52:40.097903 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:52:40.098775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
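The fetch stage above pulls the user-supplied config from the GCE metadata server and logs a SHA512 fingerprint of the raw bytes before parsing them. A sketch of an equivalent request (it only works from inside a GCE instance, and the Metadata-Flavor header is mandatory or the server refuses the request):

    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    body = urllib.request.urlopen(req, timeout=5).read()
    # Corresponds to Ignition's "parsing config with SHA512: ..." line:
    print("SHA512:", hashlib.sha512(body).hexdigest())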
Jan 30 13:52:40.129991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:52:40.158847 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 30 13:52:40.177154 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:52:40.177249 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:52:40.177276 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:52:40.183146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:52:40.213115 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:52:40.213172 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:52:40.192454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:52:40.192526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:52:40.192559 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:52:40.226117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:52:40.247212 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:52:40.278097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:52:40.420843 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:52:40.430993 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:52:40.442338 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:52:40.452016 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:52:40.592398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:52:40.597960 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:52:40.600968 systemd-networkd[750]: eth0: Gained IPv6LL Jan 30 13:52:40.648892 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:52:40.652209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:52:40.662387 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:52:40.692536 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:52:40.703942 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
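The four "cut: /sysroot/etc/...: No such file or directory" lines above are harmless first-boot noise: initrd-setup-root seeds passwd, group, shadow and gshadow on the new root and, by the look of it, uses cut to carry over fields from existing copies, which a fresh disk does not have yet. The files it creates use the standard colon-separated layout; a sketch of reading one back:

    # /etc/passwd fields: name:password:UID:GID:GECOS:home:shell
    with open("/etc/passwd") as f:
        for line in f:
            name, _pw, uid, gid, *_rest = line.rstrip("\n").split(":")
            print(f"{name}: uid={uid} gid={gid}")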
Jan 30 13:52:40.726991 ignition[900]: INFO : Ignition 2.19.0 Jan 30 13:52:40.726991 ignition[900]: INFO : Stage: mount Jan 30 13:52:40.726991 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:40.726991 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:40.726991 ignition[900]: INFO : mount: mount passed Jan 30 13:52:40.726991 ignition[900]: INFO : Ignition finished successfully Jan 30 13:52:40.845999 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Jan 30 13:52:40.846059 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:52:40.846086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:52:40.846110 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:52:40.846134 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:52:40.846158 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:52:40.725946 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:52:40.742213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:52:40.819575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:52:40.866141 unknown[929]: wrote ssh authorized keys file for user: core Jan 30 13:52:40.883008 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:52:40.883008 ignition[929]: INFO : Stage: files Jan 30 13:52:40.883008 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:40.883008 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:40.883008 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:52:40.883008 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:52:40.883008 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:52:40.883008 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:52:40.883008 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:52:43.059240 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:52:43.214855 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.232971 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:52:43.467031 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:52:43.813461 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:43.813461 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:43.854983 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:43.854983 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:43.854983 ignition[929]: INFO : files: files passed Jan 30 13:52:43.854983 ignition[929]: INFO : Ignition finished successfully Jan 30 13:52:43.820944 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:52:43.849191 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 30 13:52:43.885054 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:52:43.896499 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:52:44.062986 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:44.062986 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:43.896631 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:52:44.112119 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:43.941476 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:43.965445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:52:43.995076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:52:44.084518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:52:44.084642 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:52:44.102875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:52:44.122023 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:52:44.143180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:52:44.150105 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:52:44.191492 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:44.213055 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:52:44.251194 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:44.273245 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:44.295224 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:52:44.314223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:52:44.314421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:44.347251 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:52:44.367179 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:52:44.383214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:52:44.401167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:52:44.423240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:52:44.444179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:52:44.462237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:52:44.483180 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:52:44.503218 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:52:44.523209 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:52:44.541108 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:52:44.541332 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:52:44.572210 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:52:44.592202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:52:44.610120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:52:44.610325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:44.629282 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:52:44.629516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:52:44.659247 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:52:44.739002 ignition[982]: INFO : Ignition 2.19.0 Jan 30 13:52:44.739002 ignition[982]: INFO : Stage: umount Jan 30 13:52:44.739002 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:44.739002 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:52:44.739002 ignition[982]: INFO : umount: umount passed Jan 30 13:52:44.739002 ignition[982]: INFO : Ignition finished successfully Jan 30 13:52:44.659488 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:44.681296 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:52:44.681490 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:52:44.706098 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:52:44.748978 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:52:44.749253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:44.774288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:52:44.802154 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:52:44.802378 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:44.828277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:52:44.828468 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:52:44.863017 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:52:44.864144 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:52:44.864267 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:52:44.879640 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:52:44.879764 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:52:44.902200 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:52:44.902328 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:52:44.924142 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:52:44.924210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:52:44.944103 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:52:44.944193 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:52:44.962115 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:52:44.962204 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:52:44.982072 systemd[1]: Stopped target network.target - Network. Jan 30 13:52:44.998994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:52:44.999120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:52:45.009286 systemd[1]: Stopped target paths.target - Path Units. 
Jan 30 13:52:45.034996 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:52:45.039919 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:45.043171 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:52:45.061219 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:52:45.076248 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:52:45.076308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:52:45.091231 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:52:45.091295 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:52:45.106214 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:52:45.106286 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:52:45.123259 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:52:45.123329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:52:45.141264 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:52:45.141347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:52:45.175468 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:52:45.179924 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 30 13:52:45.183316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:52:45.210535 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:52:45.210677 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:52:45.229637 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:52:45.230148 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:52:45.247509 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:52:45.247586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:45.260959 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:52:45.289976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:52:45.290109 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:52:45.309104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:52:45.309201 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:45.329071 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:52:45.749854 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:52:45.329167 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:45.352097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:52:45.352200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:45.371248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:45.390518 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:52:45.390715 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:52:45.416938 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:52:45.417040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:52:45.434107 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:52:45.434185 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:45.463074 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:52:45.463184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:52:45.491201 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:52:45.491415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:52:45.521257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:52:45.521350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:52:45.556087 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:52:45.574967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:52:45.575180 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:45.583228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:52:45.583293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:45.613660 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:52:45.613793 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:52:45.631592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:52:45.631720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:52:45.653345 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:52:45.667054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:52:45.706642 systemd[1]: Switching root. Jan 30 13:52:46.026012 systemd-journald[183]: Journal stopped Jan 30 13:52:48.481839 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:52:48.481897 kernel: SELinux: policy capability open_perms=1 Jan 30 13:52:48.481912 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:52:48.481924 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:52:48.481934 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:52:48.481945 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:52:48.481958 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:52:48.481972 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:52:48.481984 kernel: audit: type=1403 audit(1738245166.347:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:52:48.481998 systemd[1]: Successfully loaded SELinux policy in 81.528ms. Jan 30 13:52:48.482012 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.005ms. Jan 30 13:52:48.482026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:52:48.482039 systemd[1]: Detected virtualization google. Jan 30 13:52:48.482051 systemd[1]: Detected architecture x86-64. Jan 30 13:52:48.482067 systemd[1]: Detected first boot. Jan 30 13:52:48.482081 systemd[1]: Initializing machine ID from random generator. 
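"Detected first boot" and "Initializing machine ID from random generator" above go together: on first boot /etc/machine-id is empty, so systemd mints a fresh 128-bit ID and stores it as 32 lower-case hex characters (shaped like a v4 UUID without dashes). An equivalent sketch:

    import uuid

    machine_id = uuid.uuid4().hex  # 32 lower-case hex chars, v4-UUID-shaped
    print(machine_id)
    # The same value becomes the journal directory name, e.g. the
    # 41de97138d604f63be35fa4ec4d7970e that journald reports a few lines later.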
Jan 30 13:52:48.482094 zram_generator::config[1023]: No configuration found. Jan 30 13:52:48.482107 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:52:48.482173 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:52:48.482200 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:52:48.482214 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:52:48.482227 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:52:48.482240 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:52:48.482253 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:52:48.482266 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:52:48.482279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:52:48.482296 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:52:48.482309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:52:48.482322 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:52:48.482335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:48.482349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:48.482361 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:52:48.482374 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:52:48.482388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:52:48.482407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:52:48.482420 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:52:48.482433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:52:48.482453 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:52:48.482466 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:52:48.482480 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:52:48.482497 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:52:48.482511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:48.482525 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:52:48.482541 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:52:48.482554 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:52:48.482568 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:52:48.482581 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:52:48.482594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:48.482608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:52:48.482621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:48.482639 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 30 13:52:48.482652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:52:48.482666 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:52:48.482679 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:52:48.482693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:48.482711 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:52:48.482725 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:52:48.482739 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:52:48.482753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:52:48.482767 systemd[1]: Reached target machines.target - Containers. Jan 30 13:52:48.482781 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:52:48.482794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:52:48.482808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:52:48.482845 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:52:48.482867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:52:48.482883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:52:48.482896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:52:48.482909 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:52:48.482923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:52:48.482936 kernel: ACPI: bus type drm_connector registered Jan 30 13:52:48.482949 kernel: fuse: init (API version 7.39) Jan 30 13:52:48.482966 kernel: loop: module loaded Jan 30 13:52:48.482979 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:52:48.482993 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:52:48.483006 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:52:48.483020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:52:48.483035 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:52:48.483049 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:52:48.483063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:52:48.483076 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:52:48.483121 systemd-journald[1110]: Collecting audit messages is disabled. Jan 30 13:52:48.483151 systemd-journald[1110]: Journal started Jan 30 13:52:48.483184 systemd-journald[1110]: Runtime Journal (/run/log/journal/41de97138d604f63be35fa4ec4d7970e) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:52:47.238806 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:52:47.264332 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:52:47.264938 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 30 13:52:48.506858 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:52:48.536523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:52:48.536619 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:52:48.536649 systemd[1]: Stopped verity-setup.service. Jan 30 13:52:48.585179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:48.585282 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:52:48.597475 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:52:48.608264 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:52:48.618257 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:52:48.628272 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:52:48.639365 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:52:48.649327 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:52:48.660552 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:52:48.672478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:48.684449 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:52:48.684691 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:52:48.697437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:52:48.697676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:52:48.709396 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:52:48.709645 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:52:48.719412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:52:48.719657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:52:48.731423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:52:48.731654 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:52:48.742317 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:52:48.742568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:52:48.752413 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:48.762386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:52:48.774376 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:52:48.786395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:48.811561 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:52:48.832997 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:52:48.844445 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:52:48.855013 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:52:48.855243 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 30 13:52:48.866346 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:52:48.884103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:52:48.908006 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:52:48.918212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:52:48.925433 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:52:48.941008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:52:48.952220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:52:48.961572 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:52:48.979063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:52:48.990763 systemd-journald[1110]: Time spent on flushing to /var/log/journal/41de97138d604f63be35fa4ec4d7970e is 98.131ms for 927 entries. Jan 30 13:52:48.990763 systemd-journald[1110]: System Journal (/var/log/journal/41de97138d604f63be35fa4ec4d7970e) is 8.0M, max 584.8M, 576.8M free. Jan 30 13:52:49.143069 systemd-journald[1110]: Received client request to flush runtime journal. Jan 30 13:52:49.143207 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:52:48.990073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:52:49.017248 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:52:49.035885 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:52:49.053427 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:52:49.068776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:52:49.081242 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:52:49.098459 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:52:49.111036 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:52:49.133892 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:52:49.156402 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:52:49.168794 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:52:49.181956 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:49.205654 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:52:49.231002 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:52:49.237348 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:52:49.249700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:52:49.252152 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
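The journald flush report above is a handy latency datapoint: 98.131 ms to persist 927 entries from the runtime journal into /var/log/journal works out to roughly a tenth of a millisecond per entry. The arithmetic:

    entries = 927
    total_ms = 98.131
    print(round(total_ms / entries, 4), "ms per flushed journal entry")  # ~0.1059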
Jan 30 13:52:49.272103 kernel: loop1: detected capacity change from 0 to 54824 Jan 30 13:52:49.276252 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:52:49.345101 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:52:49.348430 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Jan 30 13:52:49.348467 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Jan 30 13:52:49.370351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:49.466854 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:52:49.554003 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:52:49.603860 kernel: loop5: detected capacity change from 0 to 54824 Jan 30 13:52:49.645885 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 13:52:49.687883 kernel: loop7: detected capacity change from 0 to 140768 Jan 30 13:52:49.743653 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 30 13:52:49.747133 (sd-merge)[1165]: Merged extensions into '/usr'. Jan 30 13:52:49.754672 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:52:49.754694 systemd[1]: Reloading... Jan 30 13:52:49.846765 zram_generator::config[1188]: No configuration found. Jan 30 13:52:50.153711 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:52:50.185556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:50.291601 systemd[1]: Reloading finished in 536 ms. Jan 30 13:52:50.339448 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:52:50.350615 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:52:50.375132 systemd[1]: Starting ensure-sysext.service... Jan 30 13:52:50.393127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:52:50.406648 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:52:50.433743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:50.441503 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:52:50.442088 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:52:50.443618 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:52:50.444212 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 30 13:52:50.444332 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 30 13:52:50.445265 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:52:50.445284 systemd[1]: Reloading... Jan 30 13:52:50.452610 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:52:50.455998 systemd-tmpfiles[1233]: Skipping /boot Jan 30 13:52:50.487119 systemd-udevd[1236]: Using default interface naming scheme 'v255'. 
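The (sd-merge) lines above are systemd-sysext acting on what the Ignition files stage set up: the loop4-loop7 attachments a few lines earlier are the extension images themselves, kubernetes being the *.raw link written earlier, while the other three ship with the Flatcar/OEM image. A sketch of enumerating extensions the same way (the exact search-path list varies by systemd version):

    from pathlib import Path

    SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")
    for d in SEARCH_PATHS:
        for img in sorted(Path(d).glob("*.raw")):
            print(img.stem)  # e.g. "kubernetes"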
Jan 30 13:52:50.493275 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:52:50.493297 systemd-tmpfiles[1233]: Skipping /boot Jan 30 13:52:50.611853 zram_generator::config[1260]: No configuration found. Jan 30 13:52:50.825869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:52:50.837864 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 30 13:52:50.838290 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:52:50.857754 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 30 13:52:50.859911 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:52:50.904853 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1275) Jan 30 13:52:50.955854 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:52:50.997409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:51.040859 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:52:51.050847 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:52:51.128857 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:52:51.129336 systemd[1]: Reloading finished in 683 ms. Jan 30 13:52:51.150173 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:52:51.166510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:51.198365 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:52:51.217058 systemd[1]: Finished ensure-sysext.service. Jan 30 13:52:51.243446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 13:52:51.255207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:51.261096 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:52:51.278592 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:52:51.290544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:52:51.298081 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:52:51.315592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:52:51.338193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:52:51.342747 lvm[1343]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:52:51.355948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:52:51.373901 augenrules[1357]: No rules Jan 30 13:52:51.375512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:52:51.395207 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:52:51.404198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:52:51.410305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 30 13:52:51.429764 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:52:51.450954 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:52:51.469465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:52:51.480498 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:52:51.495493 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:52:51.515294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:52:51.525021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:51.535442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:52:51.547686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:52:51.559473 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:52:51.560209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:52:51.560426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:52:51.560851 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:52:51.561383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:52:51.561893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:52:51.562117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:52:51.562685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:52:51.562993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:52:51.568770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:52:51.569652 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:52:51.581855 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:52:51.587695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:52:51.593069 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:52:51.597057 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 30 13:52:51.597157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:52:51.597278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:52:51.603048 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:52:51.608004 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:52:51.608083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:52:51.608867 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:52:51.620334 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:52:51.641961 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 30 13:52:51.684758 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:52:51.708603 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 30 13:52:51.721258 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:52:51.739927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:51.814185 systemd-networkd[1367]: lo: Link UP Jan 30 13:52:51.814202 systemd-networkd[1367]: lo: Gained carrier Jan 30 13:52:51.816439 systemd-networkd[1367]: Enumeration completed Jan 30 13:52:51.816619 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:52:51.817382 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:51.817390 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:52:51.819279 systemd-networkd[1367]: eth0: Link UP Jan 30 13:52:51.819287 systemd-networkd[1367]: eth0: Gained carrier Jan 30 13:52:51.819311 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:51.830848 systemd-resolved[1368]: Positive Trust Anchors: Jan 30 13:52:51.830876 systemd-resolved[1368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:52:51.830948 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:52:51.831993 systemd-networkd[1367]: eth0: DHCPv4 address 10.128.0.23/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:52:51.834069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:52:51.839363 systemd-resolved[1368]: Defaulting to hostname 'linux'. Jan 30 13:52:51.846190 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:52:51.856128 systemd[1]: Reached target network.target - Network. Jan 30 13:52:51.865002 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:51.876022 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:52:51.886165 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:52:51.897126 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:52:51.908275 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:52:51.918285 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:52:51.930060 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:52:51.941044 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
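systemd-networkd above matches eth0 against the catch-all zz-default.network and flags the match as based on a potentially unpredictable interface name. Pinning the interface in a dedicated .network file avoids that warning; a minimal sketch (the file name is illustrative, and matching by MACAddress= would be more robust than Name=):

```sh
cat > /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=ipv4
EOF
networkctl reload   # re-evaluate .network files without restarting networkd
```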
Jan 30 13:52:51.941107 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:52:51.950055 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:52:51.959673 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:52:51.971714 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:52:51.984989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:52:51.995964 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:52:52.006171 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:52:52.015986 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:52:52.024044 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:52.024097 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:52.036009 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:52:52.055082 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:52:52.082376 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:52:52.100074 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:52:52.118123 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:52:52.128020 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:52:52.137856 jq[1420]: false Jan 30 13:52:52.141114 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:52:52.141543 coreos-metadata[1416]: Jan 30 13:52:52.140 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 30 13:52:52.144178 coreos-metadata[1416]: Jan 30 13:52:52.144 INFO Fetch successful Jan 30 13:52:52.144178 coreos-metadata[1416]: Jan 30 13:52:52.144 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 30 13:52:52.145708 coreos-metadata[1416]: Jan 30 13:52:52.145 INFO Fetch successful Jan 30 13:52:52.145851 coreos-metadata[1416]: Jan 30 13:52:52.145 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 30 13:52:52.147460 coreos-metadata[1416]: Jan 30 13:52:52.146 INFO Fetch successful Jan 30 13:52:52.147460 coreos-metadata[1416]: Jan 30 13:52:52.147 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 30 13:52:52.148439 coreos-metadata[1416]: Jan 30 13:52:52.148 INFO Fetch successful Jan 30 13:52:52.160081 systemd[1]: Started ntpd.service - Network Time Service. 
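The coreos-metadata fetches logged above go to the GCE metadata server at 169.254.169.254, which only answers requests carrying the Metadata-Flavor header. The same lookups can be reproduced by hand; a sketch:

```sh
MD=http://169.254.169.254/computeMetadata/v1
H='Metadata-Flavor: Google'   # mandatory, otherwise the server refuses the request
curl -s -H "$H" "$MD/instance/hostname"; echo
curl -s -H "$H" "$MD/instance/network-interfaces/0/ip"; echo
curl -s -H "$H" "$MD/instance/machine-type"; echo
```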
Jan 30 13:52:52.175932 extend-filesystems[1421]: Found loop4 Jan 30 13:52:52.175932 extend-filesystems[1421]: Found loop5 Jan 30 13:52:52.175932 extend-filesystems[1421]: Found loop6 Jan 30 13:52:52.175932 extend-filesystems[1421]: Found loop7 Jan 30 13:52:52.175932 extend-filesystems[1421]: Found sda Jan 30 13:52:52.175932 extend-filesystems[1421]: Found sda1 Jan 30 13:52:52.248708 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 30 13:52:52.238536 dbus-daemon[1417]: [system] SELinux support is enabled Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda2 Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda3 Jan 30 13:52:52.249312 extend-filesystems[1421]: Found usr Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda4 Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda6 Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda7 Jan 30 13:52:52.249312 extend-filesystems[1421]: Found sda9 Jan 30 13:52:52.249312 extend-filesystems[1421]: Checking size of /dev/sda9 Jan 30 13:52:52.249312 extend-filesystems[1421]: Resized partition /dev/sda9 Jan 30 13:52:52.397796 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 30 13:52:52.397876 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1265) Jan 30 13:52:52.182554 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:52:52.241376 dbus-daemon[1417]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1367 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: ---------------------------------------------------- Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: corporation. 
Support and training for ntp-4 are Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: available at https://www.nwtime.org/support Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: ---------------------------------------------------- Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: proto: precision = 0.086 usec (-23) Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: basedate set to 2025-01-17 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: gps base set to 2025-01-19 (week 2350) Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listen normally on 3 eth0 10.128.0.23:123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listen normally on 4 lo [::1]:123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: bind(21) AF_INET6 fe80::4001:aff:fe80:17%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:17%2#123 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:17%2 Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: Listening on routing socket on fd #21 for interface updates Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:52.398162 ntpd[1424]: 30 Jan 13:52:52 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:52.399576 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:52:52.399576 extend-filesystems[1437]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:52:52.399576 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 30 13:52:52.399576 extend-filesystems[1437]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 30 13:52:52.201340 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:52:52.280269 ntpd[1424]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:52:52.468470 extend-filesystems[1421]: Resized filesystem in /dev/sda9 Jan 30 13:52:52.295167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:52:52.280305 ntpd[1424]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:52:52.313117 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:52:52.280320 ntpd[1424]: ---------------------------------------------------- Jan 30 13:52:52.323647 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 30 13:52:52.280334 ntpd[1424]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:52:52.324485 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:52:52.280347 ntpd[1424]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:52:52.482971 update_engine[1446]: I20250130 13:52:52.471136 1446 main.cc:92] Flatcar Update Engine starting Jan 30 13:52:52.482971 update_engine[1446]: I20250130 13:52:52.473624 1446 update_check_scheduler.cc:74] Next update check in 5m8s Jan 30 13:52:52.332075 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:52:52.280360 ntpd[1424]: corporation. Support and training for ntp-4 are Jan 30 13:52:52.483590 jq[1450]: true Jan 30 13:52:52.381997 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:52:52.280374 ntpd[1424]: available at https://www.nwtime.org/support Jan 30 13:52:52.409755 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:52:52.280388 ntpd[1424]: ---------------------------------------------------- Jan 30 13:52:52.433492 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:52:52.283319 ntpd[1424]: proto: precision = 0.086 usec (-23) Jan 30 13:52:52.433769 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:52:52.283744 ntpd[1424]: basedate set to 2025-01-17 Jan 30 13:52:52.435061 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:52:52.283768 ntpd[1424]: gps base set to 2025-01-19 (week 2350) Jan 30 13:52:52.435329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:52:52.291075 ntpd[1424]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:52:52.446379 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:52:52.291144 ntpd[1424]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:52:52.446535 systemd-logind[1445]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:52:52.291423 ntpd[1424]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:52:52.446569 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:52:52.291479 ntpd[1424]: Listen normally on 3 eth0 10.128.0.23:123 Jan 30 13:52:52.449105 systemd-logind[1445]: New seat seat0. Jan 30 13:52:52.291538 ntpd[1424]: Listen normally on 4 lo [::1]:123 Jan 30 13:52:52.291600 ntpd[1424]: bind(21) AF_INET6 fe80::4001:aff:fe80:17%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:52:52.291629 ntpd[1424]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:17%2#123 Jan 30 13:52:52.291650 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:17%2 Jan 30 13:52:52.291694 ntpd[1424]: Listening on routing socket on fd #21 for interface updates Jan 30 13:52:52.297971 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:52.298014 ntpd[1424]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:52.491601 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:52:52.503012 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:52:52.503301 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:52:52.520383 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:52:52.520709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
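The extend-filesystems output above is an online grow of the mounted root: the sda9 partition is enlarged first, then resize2fs extends the ext4 filesystem from 1617920 to 2538491 4k blocks without an unmount. The equivalent manual steps, as a sketch:

```sh
# After the partition itself has been enlarged, grow the mounted
# filesystem to fill it (resize2fs with no size argument uses all space):
resize2fs /dev/sda9
dumpe2fs -h /dev/sda9 | grep 'Block count'   # expect 2538491 per the log above
```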
Jan 30 13:52:52.559860 jq[1454]: true Jan 30 13:52:52.581382 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:52:52.598576 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:52:52.617694 dbus-daemon[1417]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:52:52.654222 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:52:52.667254 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:52:52.667541 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:52:52.667775 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:52:52.686204 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:52:52.695997 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:52:52.696269 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:52:52.722495 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:52:52.746859 tar[1453]: linux-amd64/helm Jan 30 13:52:52.757184 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:52.762124 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:52:52.785294 systemd[1]: Starting sshkeys.service... Jan 30 13:52:52.888539 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:52:52.913956 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 13:52:53.072164 coreos-metadata[1489]: Jan 30 13:52:53.071 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 30 13:52:53.073418 coreos-metadata[1489]: Jan 30 13:52:53.073 INFO Fetch failed with 404: resource not found Jan 30 13:52:53.073418 coreos-metadata[1489]: Jan 30 13:52:53.073 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 30 13:52:53.074575 coreos-metadata[1489]: Jan 30 13:52:53.074 INFO Fetch successful Jan 30 13:52:53.074575 coreos-metadata[1489]: Jan 30 13:52:53.074 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 30 13:52:53.078984 coreos-metadata[1489]: Jan 30 13:52:53.077 INFO Fetch failed with 404: resource not found Jan 30 13:52:53.078984 coreos-metadata[1489]: Jan 30 13:52:53.077 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 30 13:52:53.078984 coreos-metadata[1489]: Jan 30 13:52:53.078 INFO Fetch failed with 404: resource not found Jan 30 13:52:53.078984 coreos-metadata[1489]: Jan 30 13:52:53.078 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 30 13:52:53.082962 coreos-metadata[1489]: Jan 30 13:52:53.079 INFO Fetch successful Jan 30 13:52:53.088946 unknown[1489]: wrote ssh authorized keys file for user: core Jan 30 13:52:53.160982 dbus-daemon[1417]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:52:53.161524 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:52:53.162979 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:52:53.166108 dbus-daemon[1417]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1484 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:53.181087 update-ssh-keys[1502]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:53.186639 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:52:53.195676 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:52:53.216145 systemd[1]: Finished sshkeys.service. 
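The sshkeys agent above probes instance-level attributes before project-level ones and treats each 404 as "try the next source". A sketch reproducing that probe order with plain curl (the loop illustrates the observed order; it is not the agent's actual code):

```sh
MD=http://169.254.169.254/computeMetadata/v1
H='Metadata-Flavor: Google'
for path in instance/attributes/sshKeys instance/attributes/ssh-keys \
            project/attributes/sshKeys  project/attributes/ssh-keys; do
  # curl -f turns the 404s seen in the log into a nonzero exit status
  if keys=$(curl -sf -H "$H" "$MD/$path"); then
    echo "keys found at $path"; printf '%s\n' "$keys"
  fi
done
```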
Jan 30 13:52:53.282170 ntpd[1424]: bind(24) AF_INET6 fe80::4001:aff:fe80:17%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:52:53.282722 ntpd[1424]: 30 Jan 13:52:53 ntpd[1424]: bind(24) AF_INET6 fe80::4001:aff:fe80:17%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:52:53.282722 ntpd[1424]: 30 Jan 13:52:53 ntpd[1424]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:17%2#123 Jan 30 13:52:53.282722 ntpd[1424]: 30 Jan 13:52:53 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:17%2 Jan 30 13:52:53.282223 ntpd[1424]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:17%2#123 Jan 30 13:52:53.282245 ntpd[1424]: failed to init interface for address fe80::4001:aff:fe80:17%2 Jan 30 13:52:53.349411 polkitd[1504]: Started polkitd version 121 Jan 30 13:52:53.361269 polkitd[1504]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:52:53.361622 polkitd[1504]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:52:53.362521 polkitd[1504]: Finished loading, compiling and executing 2 rules Jan 30 13:52:53.365878 dbus-daemon[1417]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:52:53.366180 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:52:53.367523 polkitd[1504]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:52:53.405685 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:52:53.410537 systemd-hostnamed[1484]: Hostname set to (transient) Jan 30 13:52:53.413136 systemd-resolved[1368]: System hostname changed to 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal'. Jan 30 13:52:53.460306 containerd[1455]: time="2025-01-30T13:52:53.460185131Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:52:53.465116 systemd-networkd[1367]: eth0: Gained IPv6LL Jan 30 13:52:53.471920 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:52:53.484282 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:52:53.504285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:53.524280 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:52:53.541259 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 30 13:52:53.551194 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:52:53.571732 init.sh[1526]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 30 13:52:53.572712 init.sh[1526]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 30 13:52:53.572712 init.sh[1526]: + /usr/bin/google_instance_setup Jan 30 13:52:53.577922 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:52:53.580204 containerd[1455]: time="2025-01-30T13:52:53.580120020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.584388 containerd[1455]: time="2025-01-30T13:52:53.584337334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:53.584539 containerd[1455]: time="2025-01-30T13:52:53.584516426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 30 13:52:53.588359 containerd[1455]: time="2025-01-30T13:52:53.587895366Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:52:53.588359 containerd[1455]: time="2025-01-30T13:52:53.588130184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:52:53.588359 containerd[1455]: time="2025-01-30T13:52:53.588159210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.588359 containerd[1455]: time="2025-01-30T13:52:53.588248816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:53.588359 containerd[1455]: time="2025-01-30T13:52:53.588273663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.590056 containerd[1455]: time="2025-01-30T13:52:53.589908466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:53.590056 containerd[1455]: time="2025-01-30T13:52:53.589950500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.590056 containerd[1455]: time="2025-01-30T13:52:53.590002038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:53.590335 containerd[1455]: time="2025-01-30T13:52:53.590023227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.591377 containerd[1455]: time="2025-01-30T13:52:53.590630644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.593807 containerd[1455]: time="2025-01-30T13:52:53.593044506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:53.593807 containerd[1455]: time="2025-01-30T13:52:53.593289135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:53.593807 containerd[1455]: time="2025-01-30T13:52:53.593336933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:52:53.594108 containerd[1455]: time="2025-01-30T13:52:53.593903566Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:52:53.599372 containerd[1455]: time="2025-01-30T13:52:53.595654203Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608519719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608631580Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608664706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608747429Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608775393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.608992010Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.609562152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.609733356Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.609759189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.609780982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:52:53.609909 containerd[1455]: time="2025-01-30T13:52:53.609806821Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610615 containerd[1455]: time="2025-01-30T13:52:53.610430874Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610615 containerd[1455]: time="2025-01-30T13:52:53.610491387Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610615 containerd[1455]: time="2025-01-30T13:52:53.610522248Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610615 containerd[1455]: time="2025-01-30T13:52:53.610552750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610615 containerd[1455]: time="2025-01-30T13:52:53.610575716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610596006Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610704826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610751060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610776575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610797984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610842339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610866188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.610898 containerd[1455]: time="2025-01-30T13:52:53.610889048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.610910716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.610951829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.610982952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611010055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611040352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611061870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611082126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611109225Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611166204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611195023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.611250 containerd[1455]: time="2025-01-30T13:52:53.611216193Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:52:53.615883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.615763985Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616125120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616153203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616176851Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616194288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616217450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616234230Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:52:53.617328 containerd[1455]: time="2025-01-30T13:52:53.616252647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.617975133Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.618190396Z" level=info msg="Connect containerd service" Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.618254712Z" level=info msg="using 
legacy CRI server" Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.618268005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.618427839Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:52:53.620212 containerd[1455]: time="2025-01-30T13:52:53.619564904Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621272912Z" level=info msg="Start subscribing containerd event" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621362477Z" level=info msg="Start recovering state" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621462513Z" level=info msg="Start event monitor" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621492812Z" level=info msg="Start snapshots syncer" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621510634Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.621523889Z" level=info msg="Start streaming server" Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.622811599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.623258818Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:52:53.623618 containerd[1455]: time="2025-01-30T13:52:53.623341107Z" level=info msg="containerd successfully booted in 0.165709s" Jan 30 13:52:53.626590 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:52:53.637185 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:52:53.637456 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:52:53.660269 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:52:53.714152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:52:53.735507 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:52:53.753629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:52:53.764310 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:52:53.819160 tar[1453]: linux-amd64/LICENSE Jan 30 13:52:53.819160 tar[1453]: linux-amd64/README.md Jan 30 13:52:53.837943 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:52:54.219959 instance-setup[1531]: INFO Running google_set_multiqueue. Jan 30 13:52:54.240304 instance-setup[1531]: INFO Set channels for eth0 to 2. Jan 30 13:52:54.244564 instance-setup[1531]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jan 30 13:52:54.246911 instance-setup[1531]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jan 30 13:52:54.247338 instance-setup[1531]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jan 30 13:52:54.249199 instance-setup[1531]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jan 30 13:52:54.249597 instance-setup[1531]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. 
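google_set_multiqueue above pins each virtio-net queue interrupt to its own CPU and sets transmit packet steering (XPS) per tx queue; the "write error" lines come from echoing a mask the kernel rejects for that sysfs file, which the script tolerates and works around. The two knobs it writes, sketched with the IRQ numbers from this log:

```sh
echo 0 > /proc/irq/27/smp_affinity_list            # queue 0 IRQ -> CPU 0
echo 1 > /proc/irq/29/smp_affinity_list            # queue 1 IRQ -> CPU 1
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus  # tx-0 -> CPU0 (hex bitmask)
echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus  # tx-1 -> CPU1 (hex bitmask)
```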
Jan 30 13:52:54.253146 instance-setup[1531]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jan 30 13:52:54.253208 instance-setup[1531]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jan 30 13:52:54.255128 instance-setup[1531]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jan 30 13:52:54.264358 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 13:52:54.269024 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 13:52:54.270959 instance-setup[1531]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 30 13:52:54.271358 instance-setup[1531]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 30 13:52:54.293533 init.sh[1526]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 30 13:52:54.438575 startup-script[1579]: INFO Starting startup scripts. Jan 30 13:52:54.445171 startup-script[1579]: INFO No startup scripts found in metadata. Jan 30 13:52:54.445247 startup-script[1579]: INFO Finished running startup scripts. Jan 30 13:52:54.467880 init.sh[1526]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 30 13:52:54.467880 init.sh[1526]: + daemon_pids=() Jan 30 13:52:54.467880 init.sh[1526]: + for d in accounts clock_skew network Jan 30 13:52:54.467880 init.sh[1526]: + daemon_pids+=($!) Jan 30 13:52:54.467880 init.sh[1526]: + for d in accounts clock_skew network Jan 30 13:52:54.467880 init.sh[1526]: + daemon_pids+=($!) Jan 30 13:52:54.467880 init.sh[1526]: + for d in accounts clock_skew network Jan 30 13:52:54.468264 init.sh[1526]: + daemon_pids+=($!) Jan 30 13:52:54.468264 init.sh[1526]: + NOTIFY_SOCKET=/run/systemd/notify Jan 30 13:52:54.468264 init.sh[1526]: + /usr/bin/systemd-notify --ready Jan 30 13:52:54.468847 init.sh[1582]: + /usr/bin/google_accounts_daemon Jan 30 13:52:54.469233 init.sh[1583]: + /usr/bin/google_clock_skew_daemon Jan 30 13:52:54.469561 init.sh[1584]: + /usr/bin/google_network_daemon Jan 30 13:52:54.490191 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 30 13:52:54.506211 init.sh[1526]: + wait -n 1582 1583 1584 Jan 30 13:52:54.762170 google-networking[1584]: INFO Starting Google Networking daemon. Jan 30 13:52:54.854978 google-clock-skew[1583]: INFO Starting Google Clock Skew daemon. Jan 30 13:52:54.862519 google-clock-skew[1583]: INFO Clock drift token has changed: 0. Jan 30 13:52:54.884111 groupadd[1593]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 30 13:52:54.888001 groupadd[1593]: group added to /etc/gshadow: name=google-sudoers Jan 30 13:52:54.948693 groupadd[1593]: new group: name=google-sudoers, GID=1000 Jan 30 13:52:54.982283 google-accounts[1582]: INFO Starting Google Accounts daemon. Jan 30 13:52:54.994227 google-accounts[1582]: WARNING OS Login not installed. Jan 30 13:52:54.996164 google-accounts[1582]: INFO Creating a new user account for 0. Jan 30 13:52:55.001145 init.sh[1602]: useradd: invalid user name '0': use --badname to ignore Jan 30 13:52:55.001484 google-accounts[1582]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 30 13:52:55.000314 systemd-resolved[1368]: Clock change detected. Flushing caches. Jan 30 13:52:55.014498 systemd-journald[1110]: Time jumped backwards, rotating. Jan 30 13:52:55.001337 google-clock-skew[1583]: INFO Synced system time with hardware clock. 
Jan 30 13:52:55.137949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:55.151180 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:52:55.160666 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:55.162169 systemd[1]: Startup finished in 1.039s (kernel) + 10.572s (initrd) + 8.973s (userspace) = 20.586s. Jan 30 13:52:56.100270 kubelet[1610]: E0130 13:52:56.100173 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:56.102405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:56.102660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:52:56.103188 systemd[1]: kubelet.service: Consumed 1.233s CPU time. Jan 30 13:52:56.200195 ntpd[1424]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:17%2]:123 Jan 30 13:52:56.200583 ntpd[1424]: 30 Jan 13:52:56 ntpd[1424]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:17%2]:123 Jan 30 13:53:02.425404 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:53:02.434478 systemd[1]: Started sshd@0-10.128.0.23:22-139.178.68.195:54546.service - OpenSSH per-connection server daemon (139.178.68.195:54546). Jan 30 13:53:02.715343 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 54546 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:02.718362 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:02.732040 systemd-logind[1445]: New session 1 of user core. Jan 30 13:53:02.732936 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:53:02.739422 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:53:02.756984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:53:02.765501 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:53:02.785886 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:53:02.914132 systemd[1627]: Queued start job for default target default.target. Jan 30 13:53:02.925567 systemd[1627]: Created slice app.slice - User Application Slice. Jan 30 13:53:02.925618 systemd[1627]: Reached target paths.target - Paths. Jan 30 13:53:02.925644 systemd[1627]: Reached target timers.target - Timers. Jan 30 13:53:02.927368 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:53:02.951199 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:53:02.951394 systemd[1627]: Reached target sockets.target - Sockets. Jan 30 13:53:02.951420 systemd[1627]: Reached target basic.target - Basic System. Jan 30 13:53:02.951490 systemd[1627]: Reached target default.target - Main User Target. Jan 30 13:53:02.951545 systemd[1627]: Startup finished in 156ms. Jan 30 13:53:02.951896 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:53:02.962346 systemd[1]: Started session-1.scope - Session 1 of User core. 
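Both kubelet starts in this log exit with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init or join, so the crash-and-scheduled-restart cycle is expected on a node that has not been bootstrapped. A quick check, as a sketch:

```sh
# kubelet keeps failing until its config file appears:
test -f /var/lib/kubelet/config.yaml || echo "node not bootstrapped yet"
# 'kubeadm init' / 'kubeadm join' create the file; after that the
# scheduled restart seen in the log succeeds:
systemctl status kubelet --no-pager
```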
Jan 30 13:53:03.199528 systemd[1]: Started sshd@1-10.128.0.23:22-139.178.68.195:54548.service - OpenSSH per-connection server daemon (139.178.68.195:54548). Jan 30 13:53:03.474585 sshd[1638]: Accepted publickey for core from 139.178.68.195 port 54548 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:03.476446 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:03.482633 systemd-logind[1445]: New session 2 of user core. Jan 30 13:53:03.489301 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:53:03.686865 sshd[1638]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:03.692146 systemd[1]: sshd@1-10.128.0.23:22-139.178.68.195:54548.service: Deactivated successfully. Jan 30 13:53:03.694520 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:53:03.695434 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:53:03.696799 systemd-logind[1445]: Removed session 2. Jan 30 13:53:03.740482 systemd[1]: Started sshd@2-10.128.0.23:22-139.178.68.195:54552.service - OpenSSH per-connection server daemon (139.178.68.195:54552). Jan 30 13:53:04.014710 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 54552 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:04.016585 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:04.023070 systemd-logind[1445]: New session 3 of user core. Jan 30 13:53:04.029326 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:53:04.220866 sshd[1645]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:04.226269 systemd[1]: sshd@2-10.128.0.23:22-139.178.68.195:54552.service: Deactivated successfully. Jan 30 13:53:04.228668 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:53:04.229621 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:53:04.231046 systemd-logind[1445]: Removed session 3. Jan 30 13:53:04.277478 systemd[1]: Started sshd@3-10.128.0.23:22-139.178.68.195:54556.service - OpenSSH per-connection server daemon (139.178.68.195:54556). Jan 30 13:53:04.564766 sshd[1652]: Accepted publickey for core from 139.178.68.195 port 54556 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:04.566596 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:04.572931 systemd-logind[1445]: New session 4 of user core. Jan 30 13:53:04.578273 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:53:04.776531 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:04.780812 systemd[1]: sshd@3-10.128.0.23:22-139.178.68.195:54556.service: Deactivated successfully. Jan 30 13:53:04.783176 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:53:04.784950 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:53:04.786479 systemd-logind[1445]: Removed session 4. Jan 30 13:53:04.833477 systemd[1]: Started sshd@4-10.128.0.23:22-139.178.68.195:47716.service - OpenSSH per-connection server daemon (139.178.68.195:47716). 
Jan 30 13:53:05.112399 sshd[1659]: Accepted publickey for core from 139.178.68.195 port 47716 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:05.114345 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:05.120603 systemd-logind[1445]: New session 5 of user core. Jan 30 13:53:05.127292 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:53:05.306130 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:53:05.306639 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:05.323829 sudo[1662]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:05.365866 sshd[1659]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:05.371286 systemd[1]: sshd@4-10.128.0.23:22-139.178.68.195:47716.service: Deactivated successfully. Jan 30 13:53:05.373592 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:53:05.375505 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:53:05.377042 systemd-logind[1445]: Removed session 5. Jan 30 13:53:05.418488 systemd[1]: Started sshd@5-10.128.0.23:22-139.178.68.195:47724.service - OpenSSH per-connection server daemon (139.178.68.195:47724). Jan 30 13:53:05.704048 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 47724 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:05.705566 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:05.711854 systemd-logind[1445]: New session 6 of user core. Jan 30 13:53:05.725302 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:53:05.881056 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:53:05.881579 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:05.886488 sudo[1671]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:05.900001 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:53:05.900503 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:05.917553 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:05.931417 auditctl[1674]: No rules Jan 30 13:53:05.932000 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:53:05.932276 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:05.938783 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:05.973183 augenrules[1692]: No rules Jan 30 13:53:05.974298 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:05.976259 sudo[1670]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:06.017601 sshd[1667]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:06.022137 systemd[1]: sshd@5-10.128.0.23:22-139.178.68.195:47724.service: Deactivated successfully. Jan 30 13:53:06.024466 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:53:06.026311 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:53:06.027844 systemd-logind[1445]: Removed session 6. 
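The sudo session above deletes the shipped audit rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". Loading a replacement rule set follows the same path; a sketch (the watch rule and file name are illustrative):

```sh
echo '-w /etc/passwd -p wa -k passwd_changes' \
  > /etc/audit/rules.d/90-passwd.rules
augenrules --load   # regenerate and load the merged rule set
auditctl -l         # list the rules now active in the kernel
```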
Jan 30 13:53:06.071500 systemd[1]: Started sshd@6-10.128.0.23:22-139.178.68.195:47730.service - OpenSSH per-connection server daemon (139.178.68.195:47730). Jan 30 13:53:06.299458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:53:06.310693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:06.360534 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 47730 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:53:06.362419 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:06.369240 systemd-logind[1445]: New session 7 of user core. Jan 30 13:53:06.376351 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:53:06.541144 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:53:06.541655 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:06.593322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:06.614754 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:53:06.697912 kubelet[1716]: E0130 13:53:06.697842 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:53:06.703463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:53:06.703701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:53:07.018466 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:53:07.030699 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:53:07.452947 dockerd[1735]: time="2025-01-30T13:53:07.452861735Z" level=info msg="Starting up" Jan 30 13:53:07.589042 dockerd[1735]: time="2025-01-30T13:53:07.588720140Z" level=info msg="Loading containers: start." Jan 30 13:53:07.739135 kernel: Initializing XFRM netlink socket Jan 30 13:53:07.847238 systemd-networkd[1367]: docker0: Link UP Jan 30 13:53:07.868969 dockerd[1735]: time="2025-01-30T13:53:07.868911073Z" level=info msg="Loading containers: done." Jan 30 13:53:07.887939 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2442588388-merged.mount: Deactivated successfully. 
Jan 30 13:53:07.889800 dockerd[1735]: time="2025-01-30T13:53:07.889328971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 13:53:07.889800 dockerd[1735]: time="2025-01-30T13:53:07.889477003Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 13:53:07.889800 dockerd[1735]: time="2025-01-30T13:53:07.889637967Z" level=info msg="Daemon has completed initialization"
Jan 30 13:53:07.929230 dockerd[1735]: time="2025-01-30T13:53:07.929157456Z" level=info msg="API listen on /run/docker.sock"
Jan 30 13:53:07.929417 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 13:53:08.999735 containerd[1455]: time="2025-01-30T13:53:08.999389523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 13:53:09.512872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951178956.mount: Deactivated successfully.
Jan 30 13:53:11.188048 containerd[1455]: time="2025-01-30T13:53:11.187970699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:11.189712 containerd[1455]: time="2025-01-30T13:53:11.189646306Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640"
Jan 30 13:53:11.190845 containerd[1455]: time="2025-01-30T13:53:11.190771309Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:11.194470 containerd[1455]: time="2025-01-30T13:53:11.194392630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:11.196101 containerd[1455]: time="2025-01-30T13:53:11.195828307Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.196376782s"
Jan 30 13:53:11.196101 containerd[1455]: time="2025-01-30T13:53:11.195882956Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 30 13:53:11.228787 containerd[1455]: time="2025-01-30T13:53:11.228709222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 13:53:12.799426 containerd[1455]: time="2025-01-30T13:53:12.799348256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:12.801061 containerd[1455]: time="2025-01-30T13:53:12.800988415Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679"
Jan 30 13:53:12.802159 containerd[1455]: time="2025-01-30T13:53:12.802070406Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:12.805643 containerd[1455]: time="2025-01-30T13:53:12.805569373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:12.807634 containerd[1455]: time="2025-01-30T13:53:12.807059755Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.578293336s"
Jan 30 13:53:12.807634 containerd[1455]: time="2025-01-30T13:53:12.807128967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 30 13:53:12.840136 containerd[1455]: time="2025-01-30T13:53:12.840062599Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 13:53:13.915717 containerd[1455]: time="2025-01-30T13:53:13.915642273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:13.917315 containerd[1455]: time="2025-01-30T13:53:13.917240677Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980"
Jan 30 13:53:13.918669 containerd[1455]: time="2025-01-30T13:53:13.918601494Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:13.922452 containerd[1455]: time="2025-01-30T13:53:13.922366635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:13.924044 containerd[1455]: time="2025-01-30T13:53:13.923844005Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.083482313s"
Jan 30 13:53:13.924044 containerd[1455]: time="2025-01-30T13:53:13.923899731Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 30 13:53:13.953912 containerd[1455]: time="2025-01-30T13:53:13.953841690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 13:53:15.048367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497688848.mount: Deactivated successfully.
Jan 30 13:53:15.615691 containerd[1455]: time="2025-01-30T13:53:15.615604963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:15.617005 containerd[1455]: time="2025-01-30T13:53:15.616923122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232"
Jan 30 13:53:15.618473 containerd[1455]: time="2025-01-30T13:53:15.618394776Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:15.621730 containerd[1455]: time="2025-01-30T13:53:15.621638723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:15.622784 containerd[1455]: time="2025-01-30T13:53:15.622582939Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.668675276s"
Jan 30 13:53:15.622784 containerd[1455]: time="2025-01-30T13:53:15.622633326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 13:53:15.652314 containerd[1455]: time="2025-01-30T13:53:15.652260227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 13:53:16.118234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796290380.mount: Deactivated successfully.
Jan 30 13:53:16.808943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 13:53:16.816243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:17.101454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:17.112706 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:53:17.212728 kubelet[2020]: E0130 13:53:17.212668 2020 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:53:17.218490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:53:17.218778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
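The "Pulled image ... size X in Ys" lines give enough to estimate pull throughput. Back-of-the-envelope arithmetic over the four pulls completed so far (the sizes are the digest sizes containerd reports, not necessarily bytes on the wire):

pulls = {  # image -> (size in bytes, pull duration in seconds), from the log
    "kube-apiserver:v1.30.9":          (32_673_812, 2.196376782),
    "kube-controller-manager:v1.30.9": (31_052_327, 1.578293336),
    "kube-scheduler:v1.30.9":          (19_229_664, 1.083482313),
    "kube-proxy:v1.30.9":              (29_057_356, 1.668675276),
}
for image, (size_bytes, seconds) in pulls.items():
    rate = size_bytes / seconds / 2**20  # MiB/s
    print(f"{image:35s} {rate:5.1f} MiB/s")
# kube-apiserver works out to roughly 14 MiB/s, kube-proxy to roughly 17 MiB/s.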
Jan 30 13:53:17.277074 containerd[1455]: time="2025-01-30T13:53:17.277002108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.278738 containerd[1455]: time="2025-01-30T13:53:17.278673704Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Jan 30 13:53:17.279931 containerd[1455]: time="2025-01-30T13:53:17.279831050Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.285917 containerd[1455]: time="2025-01-30T13:53:17.285834374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.287910 containerd[1455]: time="2025-01-30T13:53:17.287295882Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.634973642s"
Jan 30 13:53:17.287910 containerd[1455]: time="2025-01-30T13:53:17.287344713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 13:53:17.316963 containerd[1455]: time="2025-01-30T13:53:17.316911848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 13:53:17.719928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143811543.mount: Deactivated successfully.
Jan 30 13:53:17.726667 containerd[1455]: time="2025-01-30T13:53:17.726602058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.727876 containerd[1455]: time="2025-01-30T13:53:17.727799775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Jan 30 13:53:17.729377 containerd[1455]: time="2025-01-30T13:53:17.729309493Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.733157 containerd[1455]: time="2025-01-30T13:53:17.733042978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:17.735073 containerd[1455]: time="2025-01-30T13:53:17.734291057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 417.321109ms"
Jan 30 13:53:17.735073 containerd[1455]: time="2025-01-30T13:53:17.734336550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 30 13:53:17.762860 containerd[1455]: time="2025-01-30T13:53:17.762812597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 13:53:18.179979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3936324668.mount: Deactivated successfully.
Jan 30 13:53:20.351248 containerd[1455]: time="2025-01-30T13:53:20.351165877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:20.353025 containerd[1455]: time="2025-01-30T13:53:20.352937145Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061"
Jan 30 13:53:20.354278 containerd[1455]: time="2025-01-30T13:53:20.354183113Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:20.359164 containerd[1455]: time="2025-01-30T13:53:20.359074540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:20.361371 containerd[1455]: time="2025-01-30T13:53:20.361312785Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.59844802s"
Jan 30 13:53:20.361371 containerd[1455]: time="2025-01-30T13:53:20.361369788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 30 13:53:23.053674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:23.060529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:23.095888 systemd[1]: Reloading requested from client PID 2149 ('systemctl') (unit session-7.scope)...
Jan 30 13:53:23.095912 systemd[1]: Reloading...
Jan 30 13:53:23.239121 zram_generator::config[2186]: No configuration found.
Jan 30 13:53:23.415856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:53:23.518640 systemd[1]: Reloading finished in 422 ms.
Jan 30 13:53:23.554195 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 13:53:23.590265 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:53:23.590404 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:53:23.590774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:23.594661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:23.898496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:23.908951 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:53:23.968133 kubelet[2244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:23.968133 kubelet[2244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:53:23.968655 kubelet[2244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:23.968655 kubelet[2244]: I0130 13:53:23.968237 2244 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:53:24.695374 kubelet[2244]: I0130 13:53:24.695313 2244 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:53:24.695374 kubelet[2244]: I0130 13:53:24.695347 2244 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:53:24.695691 kubelet[2244]: I0130 13:53:24.695650 2244 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:53:24.722834 kubelet[2244]: I0130 13:53:24.722254 2244 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:53:24.723380 kubelet[2244]: E0130 13:53:24.723112 2244 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.744474 kubelet[2244]: I0130 13:53:24.744436 2244 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:53:24.744880 kubelet[2244]: I0130 13:53:24.744816 2244 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:53:24.745185 kubelet[2244]: I0130 13:53:24.744867 2244 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:53:24.745419 kubelet[2244]: I0130 13:53:24.745211 2244 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:53:24.745419 kubelet[2244]: I0130 13:53:24.745230 2244 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:53:24.746393 kubelet[2244]: I0130 13:53:24.746350 2244 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:24.747726 kubelet[2244]: I0130 13:53:24.747596 2244 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:53:24.747726 kubelet[2244]: I0130 13:53:24.747627 2244 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:53:24.747726 kubelet[2244]: I0130 13:53:24.747659 2244 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:53:24.747726 kubelet[2244]: I0130 13:53:24.747686 2244 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:53:24.752104 kubelet[2244]: W0130 13:53:24.751161 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.752104 kubelet[2244]: E0130 13:53:24.751267 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
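The nodeConfig dump above carries the kubelet's HardEvictionThresholds: memory.available < 100Mi plus percentage floors for nodefs/imagefs space and inodes. A minimal sketch of evaluating those thresholds; the sample stats are invented for illustration, only the thresholds come from the log:

THRESHOLDS = {  # signal -> threshold, all with the LessThan operator
    "memory.available":   100 * 2**20,  # 100Mi, absolute bytes
    "nodefs.available":   0.10,         # fractions of capacity
    "nodefs.inodesFree":  0.05,
    "imagefs.available":  0.15,
    "imagefs.inodesFree": 0.05,
}

sample_stats = {  # hypothetical node readings, same units as above
    "memory.available":   80 * 2**20,
    "nodefs.available":   0.22,
    "nodefs.inodesFree":  0.40,
    "imagefs.available":  0.12,
    "imagefs.inodesFree": 0.30,
}

for signal, threshold in THRESHOLDS.items():
    if sample_stats[signal] < threshold:
        print(f"hard eviction signal fired: {signal} < {threshold}")
# With these invented stats, memory.available and imagefs.available would trip.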
Jan 30 13:53:24.755517 kubelet[2244]: W0130 13:53:24.755420 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.755605 kubelet[2244]: E0130 13:53:24.755560 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.756048 kubelet[2244]: I0130 13:53:24.756026 2244 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:53:24.758297 kubelet[2244]: I0130 13:53:24.758272 2244 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:53:24.758484 kubelet[2244]: W0130 13:53:24.758468 2244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:53:24.760433 kubelet[2244]: I0130 13:53:24.760390 2244 server.go:1264] "Started kubelet"
Jan 30 13:53:24.773826 kubelet[2244]: E0130 13:53:24.773641 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal.181f7ccdda4485ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,UID:ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,},FirstTimestamp:2025-01-30 13:53:24.760364459 +0000 UTC m=+0.845879981,LastTimestamp:2025-01-30 13:53:24.760364459 +0000 UTC m=+0.845879981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,}"
Jan 30 13:53:24.774105 kubelet[2244]: I0130 13:53:24.774003 2244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:53:24.775434 kubelet[2244]: I0130 13:53:24.774630 2244 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:53:24.775993 kubelet[2244]: I0130 13:53:24.775961 2244 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:53:24.777620 kubelet[2244]: I0130 13:53:24.776588 2244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:53:24.779596 kubelet[2244]: I0130 13:53:24.779569 2244 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:53:24.788301 kubelet[2244]: I0130 13:53:24.788266 2244 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:53:24.788686 kubelet[2244]: I0130 13:53:24.788665 2244 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:53:24.788914 kubelet[2244]: I0130 13:53:24.788900 2244 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:53:24.789565 kubelet[2244]: W0130 13:53:24.789509 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.789734 kubelet[2244]: E0130 13:53:24.789711 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.790754 kubelet[2244]: E0130 13:53:24.790692 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="200ms"
Jan 30 13:53:24.791162 kubelet[2244]: I0130 13:53:24.791140 2244 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:53:24.791486 kubelet[2244]: I0130 13:53:24.791461 2244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:53:24.794130 kubelet[2244]: E0130 13:53:24.792923 2244 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:53:24.794130 kubelet[2244]: I0130 13:53:24.793307 2244 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:53:24.812011 kubelet[2244]: I0130 13:53:24.811933 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:53:24.814196 kubelet[2244]: I0130 13:53:24.814152 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:53:24.814196 kubelet[2244]: I0130 13:53:24.814182 2244 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:53:24.814355 kubelet[2244]: I0130 13:53:24.814207 2244 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:53:24.814355 kubelet[2244]: E0130 13:53:24.814274 2244 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:53:24.822628 kubelet[2244]: W0130 13:53:24.822508 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.823872 kubelet[2244]: E0130 13:53:24.823846 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:24.839562 kubelet[2244]: I0130 13:53:24.839532 2244 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:53:24.839562 kubelet[2244]: I0130 13:53:24.839556 2244 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:53:24.839747 kubelet[2244]: I0130 13:53:24.839582 2244 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:24.866688 kubelet[2244]: I0130 13:53:24.866648 2244 policy_none.go:49] "None policy: Start"
Jan 30 13:53:24.868654 kubelet[2244]: I0130 13:53:24.867938 2244 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:53:24.868654 kubelet[2244]: I0130 13:53:24.867973 2244 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:53:24.887219 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:53:24.899790 kubelet[2244]: I0130 13:53:24.899201 2244 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:24.899790 kubelet[2244]: E0130 13:53:24.899666 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:24.902342 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:53:24.906855 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:53:24.914441 kubelet[2244]: E0130 13:53:24.914383 2244 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:53:24.917137 kubelet[2244]: I0130 13:53:24.917109 2244 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:53:24.918490 kubelet[2244]: I0130 13:53:24.917519 2244 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:53:24.918490 kubelet[2244]: I0130 13:53:24.917689 2244 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:53:24.920551 kubelet[2244]: E0130 13:53:24.920526 2244 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" not found"
Jan 30 13:53:24.991784 kubelet[2244]: E0130 13:53:24.991614 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="400ms"
Jan 30 13:53:25.108208 kubelet[2244]: I0130 13:53:25.108168 2244 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.108897 kubelet[2244]: E0130 13:53:25.108853 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.115098 kubelet[2244]: I0130 13:53:25.115026 2244 topology_manager.go:215] "Topology Admit Handler" podUID="64c5fa53997a02e37695bb26880ef6d9" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.124837 kubelet[2244]: I0130 13:53:25.124709 2244 topology_manager.go:215] "Topology Admit Handler" podUID="a70925bd8c7125f90d0ac0372a54e32f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.130216 kubelet[2244]: I0130 13:53:25.129853 2244 topology_manager.go:215] "Topology Admit Handler" podUID="beacb21b7fe8b9c68976753a85f37b89" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.137365 systemd[1]: Created slice kubepods-burstable-pod64c5fa53997a02e37695bb26880ef6d9.slice - libcontainer container kubepods-burstable-pod64c5fa53997a02e37695bb26880ef6d9.slice.
Jan 30 13:53:25.171475 systemd[1]: Created slice kubepods-burstable-poda70925bd8c7125f90d0ac0372a54e32f.slice - libcontainer container kubepods-burstable-poda70925bd8c7125f90d0ac0372a54e32f.slice.
Jan 30 13:53:25.186502 systemd[1]: Created slice kubepods-burstable-podbeacb21b7fe8b9c68976753a85f37b89.slice - libcontainer container kubepods-burstable-podbeacb21b7fe8b9c68976753a85f37b89.slice.
Jan 30 13:53:25.191629 kubelet[2244]: I0130 13:53:25.191581 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/beacb21b7fe8b9c68976753a85f37b89-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"beacb21b7fe8b9c68976753a85f37b89\") " pod="kube-system/kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.191629 kubelet[2244]: I0130 13:53:25.191630 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.191830 kubelet[2244]: I0130 13:53:25.191664 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.191830 kubelet[2244]: I0130 13:53:25.191691 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.191830 kubelet[2244]: I0130 13:53:25.191730 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.191830 kubelet[2244]: I0130 13:53:25.191757 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.192036 kubelet[2244]: I0130 13:53:25.191787 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.192036 kubelet[2244]: I0130 13:53:25.191829 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.192036 kubelet[2244]: I0130 13:53:25.191861 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.392614 kubelet[2244]: E0130 13:53:25.392538 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="800ms"
Jan 30 13:53:25.461219 containerd[1455]: time="2025-01-30T13:53:25.461137573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:64c5fa53997a02e37695bb26880ef6d9,Namespace:kube-system,Attempt:0,}"
Jan 30 13:53:25.484218 containerd[1455]: time="2025-01-30T13:53:25.484140362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:a70925bd8c7125f90d0ac0372a54e32f,Namespace:kube-system,Attempt:0,}"
Jan 30 13:53:25.491376 containerd[1455]: time="2025-01-30T13:53:25.491280830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:beacb21b7fe8b9c68976753a85f37b89,Namespace:kube-system,Attempt:0,}"
Jan 30 13:53:25.517419 kubelet[2244]: I0130 13:53:25.517353 2244 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.517797 kubelet[2244]: E0130 13:53:25.517747 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:25.647344 kubelet[2244]: W0130 13:53:25.647166 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.647344 kubelet[2244]: E0130 13:53:25.647227 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.824315 kubelet[2244]: W0130 13:53:25.824220 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.824315 kubelet[2244]: E0130 13:53:25.824314 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.912854 kubelet[2244]: W0130 13:53:25.912656 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.912854 kubelet[2244]: E0130 13:53:25.912763 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.914034 kubelet[2244]: W0130 13:53:25.913952 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:25.914034 kubelet[2244]: E0130 13:53:25.914005 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:26.193787 kubelet[2244]: E0130 13:53:26.193630 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="1.6s"
Jan 30 13:53:26.323274 kubelet[2244]: I0130 13:53:26.323218 2244 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:26.323667 kubelet[2244]: E0130 13:53:26.323614 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:26.779531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294773473.mount: Deactivated successfully.
Jan 30 13:53:26.788757 containerd[1455]: time="2025-01-30T13:53:26.788646942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:26.790118 containerd[1455]: time="2025-01-30T13:53:26.790052055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:26.791293 containerd[1455]: time="2025-01-30T13:53:26.791235925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:53:26.791982 containerd[1455]: time="2025-01-30T13:53:26.791848128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Jan 30 13:53:26.793651 containerd[1455]: time="2025-01-30T13:53:26.793584480Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:26.795582 containerd[1455]: time="2025-01-30T13:53:26.795106914Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:26.795967 kubelet[2244]: E0130 13:53:26.795938 2244 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.23:6443: connect: connection refused
Jan 30 13:53:26.796647 containerd[1455]: time="2025-01-30T13:53:26.796559798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:53:26.798914 containerd[1455]: time="2025-01-30T13:53:26.798800772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:53:26.802034 containerd[1455]: time="2025-01-30T13:53:26.801771678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.340525312s"
Jan 30 13:53:26.805617 containerd[1455]: time="2025-01-30T13:53:26.805556640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.321288479s"
Jan 30 13:53:26.808960 containerd[1455]: time="2025-01-30T13:53:26.808901781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.317452202s"
Jan 30 13:53:26.996745 containerd[1455]: time="2025-01-30T13:53:26.996185244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:26.996745 containerd[1455]: time="2025-01-30T13:53:26.996254683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:26.996745 containerd[1455]: time="2025-01-30T13:53:26.996271261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:26.996745 containerd[1455]: time="2025-01-30T13:53:26.996383252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:27.000536 containerd[1455]: time="2025-01-30T13:53:26.999661358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:27.000536 containerd[1455]: time="2025-01-30T13:53:26.999764078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:27.000536 containerd[1455]: time="2025-01-30T13:53:26.999793024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:27.000536 containerd[1455]: time="2025-01-30T13:53:27.000217242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:27.004045 containerd[1455]: time="2025-01-30T13:53:27.002903830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:27.004045 containerd[1455]: time="2025-01-30T13:53:27.002983949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:27.004045 containerd[1455]: time="2025-01-30T13:53:27.003005042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:27.004265 containerd[1455]: time="2025-01-30T13:53:27.004066015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:27.038327 systemd[1]: Started cri-containerd-fd2a1a05826b7bc09d3405cef8dc249453fd92605db2677cbf5ae0136b01e80f.scope - libcontainer container fd2a1a05826b7bc09d3405cef8dc249453fd92605db2677cbf5ae0136b01e80f.
Jan 30 13:53:27.045599 systemd[1]: Started cri-containerd-45b0bf86f481fabf114f409ef4aadfe6f366ef1503799e13f2b67d8758406938.scope - libcontainer container 45b0bf86f481fabf114f409ef4aadfe6f366ef1503799e13f2b67d8758406938.
Jan 30 13:53:27.073575 systemd[1]: Started cri-containerd-1b083f95dab6c8f4817d1620f094ac3f5a839115298d4534e368e316ec813bf6.scope - libcontainer container 1b083f95dab6c8f4817d1620f094ac3f5a839115298d4534e368e316ec813bf6.
Jan 30 13:53:27.153325 containerd[1455]: time="2025-01-30T13:53:27.153032414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:a70925bd8c7125f90d0ac0372a54e32f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd2a1a05826b7bc09d3405cef8dc249453fd92605db2677cbf5ae0136b01e80f\"" Jan 30 13:53:27.166420 kubelet[2244]: E0130 13:53:27.165045 2244 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flat" Jan 30 13:53:27.176726 containerd[1455]: time="2025-01-30T13:53:27.176586732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:beacb21b7fe8b9c68976753a85f37b89,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b083f95dab6c8f4817d1620f094ac3f5a839115298d4534e368e316ec813bf6\"" Jan 30 13:53:27.183756 containerd[1455]: time="2025-01-30T13:53:27.183706708Z" level=info msg="CreateContainer within sandbox \"fd2a1a05826b7bc09d3405cef8dc249453fd92605db2677cbf5ae0136b01e80f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:53:27.187355 kubelet[2244]: E0130 13:53:27.187310 2244 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-21291" Jan 30 13:53:27.204698 containerd[1455]: time="2025-01-30T13:53:27.204647920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal,Uid:64c5fa53997a02e37695bb26880ef6d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"45b0bf86f481fabf114f409ef4aadfe6f366ef1503799e13f2b67d8758406938\"" Jan 30 13:53:27.205843 containerd[1455]: time="2025-01-30T13:53:27.205799679Z" level=info msg="CreateContainer within sandbox \"1b083f95dab6c8f4817d1620f094ac3f5a839115298d4534e368e316ec813bf6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:53:27.207159 kubelet[2244]: E0130 13:53:27.206893 2244 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-21291" Jan 30 13:53:27.208891 containerd[1455]: time="2025-01-30T13:53:27.208850848Z" level=info msg="CreateContainer within sandbox \"fd2a1a05826b7bc09d3405cef8dc249453fd92605db2677cbf5ae0136b01e80f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e45373d8c4c47d21cdbf4fbfc2e70dfebfb13328ce5e5cd9a043788edcc5466\"" Jan 30 13:53:27.210334 containerd[1455]: time="2025-01-30T13:53:27.210258363Z" level=info msg="StartContainer for \"2e45373d8c4c47d21cdbf4fbfc2e70dfebfb13328ce5e5cd9a043788edcc5466\"" Jan 30 13:53:27.211163 containerd[1455]: time="2025-01-30T13:53:27.210882959Z" level=info msg="CreateContainer within sandbox \"45b0bf86f481fabf114f409ef4aadfe6f366ef1503799e13f2b67d8758406938\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:53:27.231878 containerd[1455]: time="2025-01-30T13:53:27.231819962Z" level=info msg="CreateContainer within sandbox 
\"1b083f95dab6c8f4817d1620f094ac3f5a839115298d4534e368e316ec813bf6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9499e9574c6d49dc7d0e5050ac9f9a58d595083050841cfb0ebad80c6154ecf0\"" Jan 30 13:53:27.233353 containerd[1455]: time="2025-01-30T13:53:27.233311285Z" level=info msg="StartContainer for \"9499e9574c6d49dc7d0e5050ac9f9a58d595083050841cfb0ebad80c6154ecf0\"" Jan 30 13:53:27.237781 containerd[1455]: time="2025-01-30T13:53:27.237621566Z" level=info msg="CreateContainer within sandbox \"45b0bf86f481fabf114f409ef4aadfe6f366ef1503799e13f2b67d8758406938\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12b4dd13218423406a9092e6fe9518e60eb8ebc9a9a4dbb36bd7bf5cdb59a6ae\"" Jan 30 13:53:27.239775 containerd[1455]: time="2025-01-30T13:53:27.239622542Z" level=info msg="StartContainer for \"12b4dd13218423406a9092e6fe9518e60eb8ebc9a9a4dbb36bd7bf5cdb59a6ae\"" Jan 30 13:53:27.259361 systemd[1]: Started cri-containerd-2e45373d8c4c47d21cdbf4fbfc2e70dfebfb13328ce5e5cd9a043788edcc5466.scope - libcontainer container 2e45373d8c4c47d21cdbf4fbfc2e70dfebfb13328ce5e5cd9a043788edcc5466. Jan 30 13:53:27.305916 systemd[1]: Started cri-containerd-9499e9574c6d49dc7d0e5050ac9f9a58d595083050841cfb0ebad80c6154ecf0.scope - libcontainer container 9499e9574c6d49dc7d0e5050ac9f9a58d595083050841cfb0ebad80c6154ecf0. Jan 30 13:53:27.318321 systemd[1]: Started cri-containerd-12b4dd13218423406a9092e6fe9518e60eb8ebc9a9a4dbb36bd7bf5cdb59a6ae.scope - libcontainer container 12b4dd13218423406a9092e6fe9518e60eb8ebc9a9a4dbb36bd7bf5cdb59a6ae. Jan 30 13:53:27.394110 containerd[1455]: time="2025-01-30T13:53:27.394028533Z" level=info msg="StartContainer for \"2e45373d8c4c47d21cdbf4fbfc2e70dfebfb13328ce5e5cd9a043788edcc5466\" returns successfully" Jan 30 13:53:27.414136 kubelet[2244]: W0130 13:53:27.413413 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused Jan 30 13:53:27.414136 kubelet[2244]: E0130 13:53:27.413508 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused Jan 30 13:53:27.433793 containerd[1455]: time="2025-01-30T13:53:27.432964135Z" level=info msg="StartContainer for \"9499e9574c6d49dc7d0e5050ac9f9a58d595083050841cfb0ebad80c6154ecf0\" returns successfully" Jan 30 13:53:27.442615 containerd[1455]: time="2025-01-30T13:53:27.442555357Z" level=info msg="StartContainer for \"12b4dd13218423406a9092e6fe9518e60eb8ebc9a9a4dbb36bd7bf5cdb59a6ae\" returns successfully" Jan 30 13:53:27.529500 kubelet[2244]: W0130 13:53:27.529402 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused Jan 30 13:53:27.529500 kubelet[2244]: E0130 13:53:27.529504 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.23:6443: connect: connection refused Jan 30 13:53:27.932648 kubelet[2244]: I0130 13:53:27.932574 
Jan 30 13:53:30.089035 kubelet[2244]: E0130 13:53:30.088963 2244 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:30.143940 kubelet[2244]: I0130 13:53:30.142642 2244 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:30.755060 kubelet[2244]: I0130 13:53:30.754758 2244 apiserver.go:52] "Watching apiserver"
Jan 30 13:53:30.789807 kubelet[2244]: I0130 13:53:30.789722 2244 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:53:31.208624 kubelet[2244]: W0130 13:53:31.208541 2244 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 30 13:53:32.387364 systemd[1]: Reloading requested from client PID 2518 ('systemctl') (unit session-7.scope)...
Jan 30 13:53:32.387388 systemd[1]: Reloading...
Jan 30 13:53:32.568114 zram_generator::config[2554]: No configuration found.
Jan 30 13:53:32.726528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:53:32.859709 systemd[1]: Reloading finished in 471 ms.
Jan 30 13:53:32.921516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:32.933837 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:53:32.934283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:32.934458 systemd[1]: kubelet.service: Consumed 1.382s CPU time, 116.4M memory peak, 0B memory swap peak.
Jan 30 13:53:32.941509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:53:33.235601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:53:33.250196 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:53:33.323486 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:33.323486 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:53:33.323486 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:53:33.324115 kubelet[2606]: I0130 13:53:33.323613 2606 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:53:33.334181 kubelet[2606]: I0130 13:53:33.334131 2606 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:53:33.334181 kubelet[2606]: I0130 13:53:33.334163 2606 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:53:33.335004 kubelet[2606]: I0130 13:53:33.334481 2606 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:53:33.337463 kubelet[2606]: I0130 13:53:33.337017 2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:53:33.339315 kubelet[2606]: I0130 13:53:33.339274 2606 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:53:33.351767 kubelet[2606]: I0130 13:53:33.351586 2606 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:53:33.352966 kubelet[2606]: I0130 13:53:33.352393 2606 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:53:33.352966 kubelet[2606]: I0130 13:53:33.352467 2606 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:53:33.352966 kubelet[2606]: I0130 13:53:33.352749 2606 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:53:33.352966 kubelet[2606]: I0130 13:53:33.352768 2606 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:53:33.353454 kubelet[2606]: I0130 13:53:33.352840 2606 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:33.355160 kubelet[2606]: I0130 13:53:33.353887 2606 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:53:33.355160 kubelet[2606]: I0130 13:53:33.354547 2606 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
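The nodeConfig dump above carries the kubelet's hard-eviction thresholds in its HardEvictionThresholds array: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A small sketch that decodes a fragment of it with the standard library; only the JSON field names and values come from the log, the Go types are ad hoc for illustration:

```go
// Decode part of the HardEvictionThresholds fragment from the nodeConfig line above.
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```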
Jan 30 13:53:33.355160 kubelet[2606]: I0130 13:53:33.354588 2606 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:53:33.355160 kubelet[2606]: I0130 13:53:33.354611 2606 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:53:33.358997 kubelet[2606]: I0130 13:53:33.358971 2606 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:53:33.360137 kubelet[2606]: I0130 13:53:33.360112 2606 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:53:33.364399 kubelet[2606]: I0130 13:53:33.361315 2606 server.go:1264] "Started kubelet"
Jan 30 13:53:33.367980 kubelet[2606]: I0130 13:53:33.367205 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:53:33.390616 kubelet[2606]: I0130 13:53:33.388705 2606 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:53:33.390616 kubelet[2606]: I0130 13:53:33.390523 2606 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:53:33.397836 kubelet[2606]: I0130 13:53:33.395718 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:53:33.397836 kubelet[2606]: I0130 13:53:33.396037 2606 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:53:33.402997 kubelet[2606]: I0130 13:53:33.402943 2606 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:53:33.422369 kubelet[2606]: I0130 13:53:33.403126 2606 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:53:33.422369 kubelet[2606]: I0130 13:53:33.422121 2606 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:53:33.422369 kubelet[2606]: I0130 13:53:33.407786 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:53:33.425714 kubelet[2606]: I0130 13:53:33.425676 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:53:33.425885 kubelet[2606]: I0130 13:53:33.425872 2606 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:53:33.426005 kubelet[2606]: I0130 13:53:33.425993 2606 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:53:33.426195 kubelet[2606]: E0130 13:53:33.426163 2606 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:53:33.439012 kubelet[2606]: I0130 13:53:33.438974 2606 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:53:33.441611 kubelet[2606]: I0130 13:53:33.439438 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:53:33.445270 kubelet[2606]: E0130 13:53:33.443899 2606 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:53:33.452372 kubelet[2606]: I0130 13:53:33.452339 2606 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:53:33.511509 kubelet[2606]: I0130 13:53:33.511219 2606 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.515843 kubelet[2606]: I0130 13:53:33.515721 2606 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:53:33.515843 kubelet[2606]: I0130 13:53:33.515764 2606 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:53:33.515843 kubelet[2606]: I0130 13:53:33.515794 2606 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:53:33.517575 kubelet[2606]: I0130 13:53:33.517341 2606 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:53:33.517575 kubelet[2606]: I0130 13:53:33.517384 2606 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:53:33.517575 kubelet[2606]: I0130 13:53:33.517415 2606 policy_none.go:49] "None policy: Start"
Jan 30 13:53:33.519387 kubelet[2606]: I0130 13:53:33.519358 2606 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:53:33.519537 kubelet[2606]: I0130 13:53:33.519404 2606 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:53:33.520321 kubelet[2606]: I0130 13:53:33.519611 2606 state_mem.go:75] "Updated machine memory state"
Jan 30 13:53:33.532202 kubelet[2606]: E0130 13:53:33.532122 2606 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:53:33.535685 kubelet[2606]: I0130 13:53:33.534024 2606 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.535685 kubelet[2606]: I0130 13:53:33.534938 2606 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.548394 kubelet[2606]: I0130 13:53:33.548363 2606 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:53:33.548802 kubelet[2606]: I0130 13:53:33.548764 2606 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:53:33.549037 kubelet[2606]: I0130 13:53:33.549019 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:53:33.733113 kubelet[2606]: I0130 13:53:33.733021 2606 topology_manager.go:215] "Topology Admit Handler" podUID="64c5fa53997a02e37695bb26880ef6d9" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.733561 kubelet[2606]: I0130 13:53:33.733494 2606 topology_manager.go:215] "Topology Admit Handler" podUID="a70925bd8c7125f90d0ac0372a54e32f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.733691 kubelet[2606]: I0130 13:53:33.733676 2606 topology_manager.go:215] "Topology Admit Handler" podUID="beacb21b7fe8b9c68976753a85f37b89" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.744950 kubelet[2606]: W0130 13:53:33.743670 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 30 13:53:33.745113 kubelet[2606]: W0130 13:53:33.744968 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 30 13:53:33.747053 kubelet[2606]: W0130 13:53:33.746941 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 30 13:53:33.747053 kubelet[2606]: E0130 13:53:33.747047 2606 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825517 kubelet[2606]: I0130 13:53:33.825233 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825517 kubelet[2606]: I0130 13:53:33.825285 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825517 kubelet[2606]: I0130 13:53:33.825318 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825517 kubelet[2606]: I0130 13:53:33.825355 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825847 kubelet[2606]: I0130 13:53:33.825401 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825847 kubelet[2606]: I0130 13:53:33.825434 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.825963 kubelet[2606]: I0130 13:53:33.825930 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70925bd8c7125f90d0ac0372a54e32f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"a70925bd8c7125f90d0ac0372a54e32f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.826022 kubelet[2606]: I0130 13:53:33.825988 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/beacb21b7fe8b9c68976753a85f37b89-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"beacb21b7fe8b9c68976753a85f37b89\") " pod="kube-system/kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:33.826090 kubelet[2606]: I0130 13:53:33.826021 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64c5fa53997a02e37695bb26880ef6d9-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal\" (UID: \"64c5fa53997a02e37695bb26880ef6d9\") " pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal"
Jan 30 13:53:34.358538 kubelet[2606]: I0130 13:53:34.358182 2606 apiserver.go:52] "Watching apiserver"
Jan 30 13:53:34.422356 kubelet[2606]: I0130 13:53:34.422278 2606 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:53:34.457781 kubelet[2606]: I0130 13:53:34.457478 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" podStartSLOduration=1.457451509 podStartE2EDuration="1.457451509s" podCreationTimestamp="2025-01-30 13:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:34.440714379 +0000 UTC m=+1.184213528" watchObservedRunningTime="2025-01-30 13:53:34.457451509 +0000 UTC m=+1.200950636"
Jan 30 13:53:34.457781 kubelet[2606]: I0130 13:53:34.457636 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" podStartSLOduration=1.457628607 podStartE2EDuration="1.457628607s" podCreationTimestamp="2025-01-30 13:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:34.457577583 +0000 UTC m=+1.201076705" watchObservedRunningTime="2025-01-30 13:53:34.457628607 +0000 UTC m=+1.201127735"
Jan 30 13:53:34.528380 kubelet[2606]: I0130 13:53:34.528301 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" podStartSLOduration=3.528278489 podStartE2EDuration="3.528278489s" podCreationTimestamp="2025-01-30 13:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:34.483701223 +0000 UTC m=+1.227200349" watchObservedRunningTime="2025-01-30 13:53:34.528278489 +0000 UTC m=+1.271777615"
Jan 30 13:53:37.580237 update_engine[1446]: I20250130 13:53:37.580129 1446 update_attempter.cc:509] Updating boot flags...
Jan 30 13:53:37.649200 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2674)
Jan 30 13:53:37.785715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2670)
Jan 30 13:53:37.913488 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2670)
Jan 30 13:53:39.342179 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jan 30 13:53:39.384072 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jan 30 13:53:39.389654 systemd[1]: sshd@6-10.128.0.23:22-139.178.68.195:47730.service: Deactivated successfully.
Jan 30 13:53:39.392838 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:53:39.393160 systemd[1]: session-7.scope: Consumed 5.752s CPU time, 191.6M memory peak, 0B memory swap peak.
Jan 30 13:53:39.395395 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:53:39.397433 systemd-logind[1445]: Removed session 7.
Jan 30 13:53:48.470121 kubelet[2606]: I0130 13:53:48.468863 2606 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:53:48.472162 containerd[1455]: time="2025-01-30T13:53:48.471625398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:53:48.473615 kubelet[2606]: I0130 13:53:48.473183 2606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:53:48.839467 kubelet[2606]: I0130 13:53:48.839403 2606 topology_manager.go:215] "Topology Admit Handler" podUID="ca75e77d-f247-4d42-b331-4d99ce48b8a9" podNamespace="kube-system" podName="kube-proxy-xdh9z"
Jan 30 13:53:48.855734 systemd[1]: Created slice kubepods-besteffort-podca75e77d_f247_4d42_b331_4d99ce48b8a9.slice - libcontainer container kubepods-besteffort-podca75e77d_f247_4d42_b331_4d99ce48b8a9.slice.
Jan 30 13:53:48.860413 kubelet[2606]: I0130 13:53:48.860368 2606 topology_manager.go:215] "Topology Admit Handler" podUID="5bbe47eb-931b-4e9d-b44a-491d89681e5b" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-v7zq2"
Jan 30 13:53:48.879623 systemd[1]: Created slice kubepods-besteffort-pod5bbe47eb_931b_4e9d_b44a_491d89681e5b.slice - libcontainer container kubepods-besteffort-pod5bbe47eb_931b_4e9d_b44a_491d89681e5b.slice.
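Once the node is registered, the kubelet pushes the node's pod CIDR (192.168.0.0/24 in the entries above) down to containerd through CRI, and the CNI config arrives later ("wait for other system components to drop the config"). For reference, a few lines of standard-library Go showing what that /24 allocates; everything here is derived from the logged CIDR:

```go
// Inspect the pod CIDR the kubelet handed to the runtime above.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24") // CIDR from the log
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for this node's pods.
	fmt.Printf("network=%s hostBits=%d addresses=%d\n", ipnet, bits-ones, 1<<(bits-ones))
}
```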
Jan 30 13:53:49.028454 kubelet[2606]: I0130 13:53:49.028371 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca75e77d-f247-4d42-b331-4d99ce48b8a9-xtables-lock\") pod \"kube-proxy-xdh9z\" (UID: \"ca75e77d-f247-4d42-b331-4d99ce48b8a9\") " pod="kube-system/kube-proxy-xdh9z"
Jan 30 13:53:49.028454 kubelet[2606]: I0130 13:53:49.028432 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5bbe47eb-931b-4e9d-b44a-491d89681e5b-var-lib-calico\") pod \"tigera-operator-7bc55997bb-v7zq2\" (UID: \"5bbe47eb-931b-4e9d-b44a-491d89681e5b\") " pod="tigera-operator/tigera-operator-7bc55997bb-v7zq2"
Jan 30 13:53:49.028705 kubelet[2606]: I0130 13:53:49.028478 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca75e77d-f247-4d42-b331-4d99ce48b8a9-kube-proxy\") pod \"kube-proxy-xdh9z\" (UID: \"ca75e77d-f247-4d42-b331-4d99ce48b8a9\") " pod="kube-system/kube-proxy-xdh9z"
Jan 30 13:53:49.028705 kubelet[2606]: I0130 13:53:49.028508 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49r7j\" (UniqueName: \"kubernetes.io/projected/5bbe47eb-931b-4e9d-b44a-491d89681e5b-kube-api-access-49r7j\") pod \"tigera-operator-7bc55997bb-v7zq2\" (UID: \"5bbe47eb-931b-4e9d-b44a-491d89681e5b\") " pod="tigera-operator/tigera-operator-7bc55997bb-v7zq2"
Jan 30 13:53:49.028705 kubelet[2606]: I0130 13:53:49.028532 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca75e77d-f247-4d42-b331-4d99ce48b8a9-lib-modules\") pod \"kube-proxy-xdh9z\" (UID: \"ca75e77d-f247-4d42-b331-4d99ce48b8a9\") " pod="kube-system/kube-proxy-xdh9z"
Jan 30 13:53:49.028705 kubelet[2606]: I0130 13:53:49.028556 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrbh\" (UniqueName: \"kubernetes.io/projected/ca75e77d-f247-4d42-b331-4d99ce48b8a9-kube-api-access-gwrbh\") pod \"kube-proxy-xdh9z\" (UID: \"ca75e77d-f247-4d42-b331-4d99ce48b8a9\") " pod="kube-system/kube-proxy-xdh9z"
Jan 30 13:53:49.169427 containerd[1455]: time="2025-01-30T13:53:49.169277206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdh9z,Uid:ca75e77d-f247-4d42-b331-4d99ce48b8a9,Namespace:kube-system,Attempt:0,}"
Jan 30 13:53:49.188591 containerd[1455]: time="2025-01-30T13:53:49.187163498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-v7zq2,Uid:5bbe47eb-931b-4e9d-b44a-491d89681e5b,Namespace:tigera-operator,Attempt:0,}"
Jan 30 13:53:49.217334 containerd[1455]: time="2025-01-30T13:53:49.217208492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:49.217493 containerd[1455]: time="2025-01-30T13:53:49.217364380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:49.218586 containerd[1455]: time="2025-01-30T13:53:49.217452202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:49.218843 containerd[1455]: time="2025-01-30T13:53:49.218746061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:49.257633 containerd[1455]: time="2025-01-30T13:53:49.257498438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:53:49.259582 containerd[1455]: time="2025-01-30T13:53:49.259273139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:53:49.259582 containerd[1455]: time="2025-01-30T13:53:49.259312479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:49.259582 containerd[1455]: time="2025-01-30T13:53:49.259448761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:53:49.261952 systemd[1]: Started cri-containerd-acbd26137f2e6ce8fee45b071ca3109db594659cdf86e889c4281a8c02498e4e.scope - libcontainer container acbd26137f2e6ce8fee45b071ca3109db594659cdf86e889c4281a8c02498e4e.
Jan 30 13:53:49.297361 systemd[1]: Started cri-containerd-f9264130dd92571744fad44790547eeb13ea0e300a202c11b13d11b22e8e4141.scope - libcontainer container f9264130dd92571744fad44790547eeb13ea0e300a202c11b13d11b22e8e4141.
Jan 30 13:53:49.314672 containerd[1455]: time="2025-01-30T13:53:49.314529839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdh9z,Uid:ca75e77d-f247-4d42-b331-4d99ce48b8a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"acbd26137f2e6ce8fee45b071ca3109db594659cdf86e889c4281a8c02498e4e\""
Jan 30 13:53:49.341869 containerd[1455]: time="2025-01-30T13:53:49.341472273Z" level=info msg="CreateContainer within sandbox \"acbd26137f2e6ce8fee45b071ca3109db594659cdf86e889c4281a8c02498e4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:53:49.368291 containerd[1455]: time="2025-01-30T13:53:49.368030398Z" level=info msg="CreateContainer within sandbox \"acbd26137f2e6ce8fee45b071ca3109db594659cdf86e889c4281a8c02498e4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"822ee30d2c06d48e11f48e5b1415422615b607e4dc92b8e558da137957a9514b\""
Jan 30 13:53:49.371571 containerd[1455]: time="2025-01-30T13:53:49.371499805Z" level=info msg="StartContainer for \"822ee30d2c06d48e11f48e5b1415422615b607e4dc92b8e558da137957a9514b\""
Jan 30 13:53:49.380493 containerd[1455]: time="2025-01-30T13:53:49.380047979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-v7zq2,Uid:5bbe47eb-931b-4e9d-b44a-491d89681e5b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9264130dd92571744fad44790547eeb13ea0e300a202c11b13d11b22e8e4141\""
Jan 30 13:53:49.384028 containerd[1455]: time="2025-01-30T13:53:49.383982939Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 13:53:49.418311 systemd[1]: Started cri-containerd-822ee30d2c06d48e11f48e5b1415422615b607e4dc92b8e558da137957a9514b.scope - libcontainer container 822ee30d2c06d48e11f48e5b1415422615b607e4dc92b8e558da137957a9514b.
Jan 30 13:53:49.463665 containerd[1455]: time="2025-01-30T13:53:49.463506576Z" level=info msg="StartContainer for \"822ee30d2c06d48e11f48e5b1415422615b607e4dc92b8e558da137957a9514b\" returns successfully"
Jan 30 13:53:50.952868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641517604.mount: Deactivated successfully.
Jan 30 13:53:52.500885 containerd[1455]: time="2025-01-30T13:53:52.500814937Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:52.502458 containerd[1455]: time="2025-01-30T13:53:52.502369587Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 30 13:53:52.503853 containerd[1455]: time="2025-01-30T13:53:52.503785057Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:52.508071 containerd[1455]: time="2025-01-30T13:53:52.508002134Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:53:52.509549 containerd[1455]: time="2025-01-30T13:53:52.509355516Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.125300321s"
Jan 30 13:53:52.509549 containerd[1455]: time="2025-01-30T13:53:52.509403486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 30 13:53:52.512977 containerd[1455]: time="2025-01-30T13:53:52.512934760Z" level=info msg="CreateContainer within sandbox \"f9264130dd92571744fad44790547eeb13ea0e300a202c11b13d11b22e8e4141\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 13:53:52.529763 containerd[1455]: time="2025-01-30T13:53:52.529699830Z" level=info msg="CreateContainer within sandbox \"f9264130dd92571744fad44790547eeb13ea0e300a202c11b13d11b22e8e4141\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e\""
Jan 30 13:53:52.531457 containerd[1455]: time="2025-01-30T13:53:52.531033336Z" level=info msg="StartContainer for \"6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e\""
Jan 30 13:53:52.576859 systemd[1]: run-containerd-runc-k8s.io-6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e-runc.gnrJDg.mount: Deactivated successfully.
Jan 30 13:53:52.588331 systemd[1]: Started cri-containerd-6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e.scope - libcontainer container 6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e.
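The pull above reports bytes read=21762497 over 3.125300321s, i.e. roughly 7 MB/s for the tigera/operator image (reference size 21758492 bytes). A two-line check of that arithmetic, using only the numbers from the log:

```go
// Back-of-the-envelope check of the pull throughput logged above.
package main

import "fmt"

func main() {
	const bytesRead = 21762497  // "bytes read=21762497"
	const seconds = 3.125300321 // "in 3.125300321s"
	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ~6.96 MB/s
}
```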
Jan 30 13:53:52.625507 containerd[1455]: time="2025-01-30T13:53:52.625451350Z" level=info msg="StartContainer for \"6007bbb7fd1b78b8d7e99b2007217ce5d07ef7353fec6f704d938f254e15969e\" returns successfully"
Jan 30 13:53:53.443961 kubelet[2606]: I0130 13:53:53.443252 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xdh9z" podStartSLOduration=5.443228518 podStartE2EDuration="5.443228518s" podCreationTimestamp="2025-01-30 13:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:49.537542342 +0000 UTC m=+16.281041469" watchObservedRunningTime="2025-01-30 13:53:53.443228518 +0000 UTC m=+20.186727645"
Jan 30 13:53:53.560051 kubelet[2606]: I0130 13:53:53.559970 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-v7zq2" podStartSLOduration=2.431597099 podStartE2EDuration="5.559944755s" podCreationTimestamp="2025-01-30 13:53:48 +0000 UTC" firstStartedPulling="2025-01-30 13:53:49.382206707 +0000 UTC m=+16.125705815" lastFinishedPulling="2025-01-30 13:53:52.510554366 +0000 UTC m=+19.254053471" observedRunningTime="2025-01-30 13:53:53.558208415 +0000 UTC m=+20.301707542" watchObservedRunningTime="2025-01-30 13:53:53.559944755 +0000 UTC m=+20.303443882"
Jan 30 13:53:55.946197 kubelet[2606]: I0130 13:53:55.944687 2606 topology_manager.go:215] "Topology Admit Handler" podUID="dae3a7d6-38c6-4e8c-8aa4-c4156335f356" podNamespace="calico-system" podName="calico-typha-588665988d-prpvz"
Jan 30 13:53:55.961931 systemd[1]: Created slice kubepods-besteffort-poddae3a7d6_38c6_4e8c_8aa4_c4156335f356.slice - libcontainer container kubepods-besteffort-poddae3a7d6_38c6_4e8c_8aa4_c4156335f356.slice.
Jan 30 13:53:55.979242 kubelet[2606]: I0130 13:53:55.979184 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdrsj\" (UniqueName: \"kubernetes.io/projected/dae3a7d6-38c6-4e8c-8aa4-c4156335f356-kube-api-access-tdrsj\") pod \"calico-typha-588665988d-prpvz\" (UID: \"dae3a7d6-38c6-4e8c-8aa4-c4156335f356\") " pod="calico-system/calico-typha-588665988d-prpvz"
Jan 30 13:53:55.979242 kubelet[2606]: I0130 13:53:55.979246 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3a7d6-38c6-4e8c-8aa4-c4156335f356-tigera-ca-bundle\") pod \"calico-typha-588665988d-prpvz\" (UID: \"dae3a7d6-38c6-4e8c-8aa4-c4156335f356\") " pod="calico-system/calico-typha-588665988d-prpvz"
Jan 30 13:53:55.979500 kubelet[2606]: I0130 13:53:55.979277 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dae3a7d6-38c6-4e8c-8aa4-c4156335f356-typha-certs\") pod \"calico-typha-588665988d-prpvz\" (UID: \"dae3a7d6-38c6-4e8c-8aa4-c4156335f356\") " pod="calico-system/calico-typha-588665988d-prpvz"
Jan 30 13:53:56.099441 kubelet[2606]: I0130 13:53:56.099382 2606 topology_manager.go:215] "Topology Admit Handler" podUID="a96d4586-88b8-4432-b91b-9da478e7f363" podNamespace="calico-system" podName="calico-node-g86lj"
Jan 30 13:53:56.117220 systemd[1]: Created slice kubepods-besteffort-poda96d4586_88b8_4432_b91b_9da478e7f363.slice - libcontainer container kubepods-besteffort-poda96d4586_88b8_4432_b91b_9da478e7f363.slice.
Jan 30 13:53:56.183883 kubelet[2606]: I0130 13:53:56.183823 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a96d4586-88b8-4432-b91b-9da478e7f363-node-certs\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.183883 kubelet[2606]: I0130 13:53:56.183887 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-flexvol-driver-host\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184177 kubelet[2606]: I0130 13:53:56.183920 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlsnd\" (UniqueName: \"kubernetes.io/projected/a96d4586-88b8-4432-b91b-9da478e7f363-kube-api-access-vlsnd\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184177 kubelet[2606]: I0130 13:53:56.183949 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-lib-modules\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184177 kubelet[2606]: I0130 13:53:56.183977 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a96d4586-88b8-4432-b91b-9da478e7f363-tigera-ca-bundle\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184177 kubelet[2606]: I0130 13:53:56.184003 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-policysync\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184177 kubelet[2606]: I0130 13:53:56.184028 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-cni-bin-dir\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184434 kubelet[2606]: I0130 13:53:56.184053 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-cni-net-dir\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184434 kubelet[2606]: I0130 13:53:56.184102 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-var-lib-calico\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184434 kubelet[2606]: I0130 13:53:56.184136 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-var-run-calico\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184434 kubelet[2606]: I0130 13:53:56.184164 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-cni-log-dir\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.184434 kubelet[2606]: I0130 13:53:56.184213 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a96d4586-88b8-4432-b91b-9da478e7f363-xtables-lock\") pod \"calico-node-g86lj\" (UID: \"a96d4586-88b8-4432-b91b-9da478e7f363\") " pod="calico-system/calico-node-g86lj"
Jan 30 13:53:56.221770 kubelet[2606]: I0130 13:53:56.221568 2606 topology_manager.go:215] "Topology Admit Handler" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" podNamespace="calico-system" podName="csi-node-driver-9hs4d"
Jan 30 13:53:56.222058 kubelet[2606]: E0130 13:53:56.222005 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3"
Jan 30 13:53:56.271600 containerd[1455]: time="2025-01-30T13:53:56.271539714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588665988d-prpvz,Uid:dae3a7d6-38c6-4e8c-8aa4-c4156335f356,Namespace:calico-system,Attempt:0,}"
Jan 30 13:53:56.289115 kubelet[2606]: I0130 13:53:56.285826 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ce51f87-697e-49a4-af41-1b0a623704f3-socket-dir\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d"
Jan 30 13:53:56.289115 kubelet[2606]: I0130 13:53:56.285950 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqdbp\" (UniqueName: \"kubernetes.io/projected/4ce51f87-697e-49a4-af41-1b0a623704f3-kube-api-access-fqdbp\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d"
Jan 30 13:53:56.289115 kubelet[2606]: I0130 13:53:56.285997 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ce51f87-697e-49a4-af41-1b0a623704f3-kubelet-dir\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d"
Jan 30 13:53:56.289115 kubelet[2606]: I0130 13:53:56.286064 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ce51f87-697e-49a4-af41-1b0a623704f3-registration-dir\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d"
Jan 30 13:53:56.294300 kubelet[2606]: I0130 13:53:56.294258 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4ce51f87-697e-49a4-af41-1b0a623704f3-varrun\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d"
kubelet[2606]: I0130 13:53:56.294258 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4ce51f87-697e-49a4-af41-1b0a623704f3-varrun\") pod \"csi-node-driver-9hs4d\" (UID: \"4ce51f87-697e-49a4-af41-1b0a623704f3\") " pod="calico-system/csi-node-driver-9hs4d" Jan 30 13:53:56.300812 kubelet[2606]: E0130 13:53:56.300733 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.301315 kubelet[2606]: W0130 13:53:56.301288 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.303124 kubelet[2606]: E0130 13:53:56.301492 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.303756 kubelet[2606]: E0130 13:53:56.303728 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.303907 kubelet[2606]: W0130 13:53:56.303885 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.304064 kubelet[2606]: E0130 13:53:56.304043 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.304612 kubelet[2606]: E0130 13:53:56.304593 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.304743 kubelet[2606]: W0130 13:53:56.304725 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.304852 kubelet[2606]: E0130 13:53:56.304835 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.306215 kubelet[2606]: E0130 13:53:56.306195 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.306348 kubelet[2606]: W0130 13:53:56.306330 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.306448 kubelet[2606]: E0130 13:53:56.306433 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:56.306884 kubelet[2606]: E0130 13:53:56.306869 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.306994 kubelet[2606]: W0130 13:53:56.306980 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.308141 kubelet[2606]: E0130 13:53:56.308115 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.323879 kubelet[2606]: E0130 13:53:56.323845 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.324122 kubelet[2606]: W0130 13:53:56.324060 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.324276 kubelet[2606]: E0130 13:53:56.324252 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.378196 containerd[1455]: time="2025-01-30T13:53:56.377348394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:56.378196 containerd[1455]: time="2025-01-30T13:53:56.377438490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:56.381606 containerd[1455]: time="2025-01-30T13:53:56.380625966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:56.381606 containerd[1455]: time="2025-01-30T13:53:56.380860958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:56.399806 kubelet[2606]: E0130 13:53:56.399194 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.399806 kubelet[2606]: W0130 13:53:56.399243 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.399806 kubelet[2606]: E0130 13:53:56.399273 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.401559 kubelet[2606]: E0130 13:53:56.401404 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.401559 kubelet[2606]: W0130 13:53:56.401429 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.402117 kubelet[2606]: E0130 13:53:56.401794 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:56.402576 kubelet[2606]: E0130 13:53:56.402461 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.402576 kubelet[2606]: W0130 13:53:56.402501 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.402892 kubelet[2606]: E0130 13:53:56.402736 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.403652 kubelet[2606]: E0130 13:53:56.403395 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.403652 kubelet[2606]: W0130 13:53:56.403414 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.404167 kubelet[2606]: E0130 13:53:56.403994 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.404503 kubelet[2606]: E0130 13:53:56.404312 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.404503 kubelet[2606]: W0130 13:53:56.404326 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.404856 kubelet[2606]: E0130 13:53:56.404667 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.405882 kubelet[2606]: E0130 13:53:56.404999 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.405882 kubelet[2606]: W0130 13:53:56.405014 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.405882 kubelet[2606]: E0130 13:53:56.405123 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.406807 kubelet[2606]: E0130 13:53:56.406635 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.406807 kubelet[2606]: W0130 13:53:56.406652 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.407263 kubelet[2606]: E0130 13:53:56.406975 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:56.408019 kubelet[2606]: E0130 13:53:56.407774 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.408019 kubelet[2606]: W0130 13:53:56.407853 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.409054 kubelet[2606]: E0130 13:53:56.408742 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.409604 kubelet[2606]: E0130 13:53:56.409359 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.409604 kubelet[2606]: W0130 13:53:56.409377 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.409604 kubelet[2606]: E0130 13:53:56.409571 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.410760 kubelet[2606]: E0130 13:53:56.410428 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.410760 kubelet[2606]: W0130 13:53:56.410444 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.410760 kubelet[2606]: E0130 13:53:56.410640 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.411373 kubelet[2606]: E0130 13:53:56.411206 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.411373 kubelet[2606]: W0130 13:53:56.411221 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.411373 kubelet[2606]: E0130 13:53:56.411343 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.412006 kubelet[2606]: E0130 13:53:56.411724 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.412006 kubelet[2606]: W0130 13:53:56.411737 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.412006 kubelet[2606]: E0130 13:53:56.411829 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:56.412437 kubelet[2606]: E0130 13:53:56.412186 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.412437 kubelet[2606]: W0130 13:53:56.412200 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.412754 kubelet[2606]: E0130 13:53:56.412563 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.412986 kubelet[2606]: E0130 13:53:56.412951 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.412986 kubelet[2606]: W0130 13:53:56.412967 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.413370 kubelet[2606]: E0130 13:53:56.413217 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.413860 kubelet[2606]: E0130 13:53:56.413731 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.413860 kubelet[2606]: W0130 13:53:56.413748 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.414282 kubelet[2606]: E0130 13:53:56.414113 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.414282 kubelet[2606]: E0130 13:53:56.414252 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.414282 kubelet[2606]: W0130 13:53:56.414263 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.414764 kubelet[2606]: E0130 13:53:56.414533 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.415117 kubelet[2606]: E0130 13:53:56.414927 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.415117 kubelet[2606]: W0130 13:53:56.414942 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.415346 kubelet[2606]: E0130 13:53:56.415278 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:56.415736 kubelet[2606]: E0130 13:53:56.415718 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.416108 kubelet[2606]: W0130 13:53:56.415834 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.416573 kubelet[2606]: E0130 13:53:56.416547 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.417058 kubelet[2606]: E0130 13:53:56.416942 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.417058 kubelet[2606]: W0130 13:53:56.416964 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.417509 kubelet[2606]: E0130 13:53:56.417243 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.417721 kubelet[2606]: E0130 13:53:56.417692 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.417943 kubelet[2606]: W0130 13:53:56.417811 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.418222 kubelet[2606]: E0130 13:53:56.418033 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.418397 kubelet[2606]: E0130 13:53:56.418381 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.418600 kubelet[2606]: W0130 13:53:56.418478 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.418750 kubelet[2606]: E0130 13:53:56.418680 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.419507 kubelet[2606]: E0130 13:53:56.419033 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.419507 kubelet[2606]: W0130 13:53:56.419049 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.421401 systemd[1]: Started cri-containerd-b7a70c52acd63af9436882b442644967218a492999394df034bd0e36a88ef87b.scope - libcontainer container b7a70c52acd63af9436882b442644967218a492999394df034bd0e36a88ef87b. 
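[Annotation] The triplet repeated throughout this stretch comes from kubelet's FlexVolume prober: each probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ execs the driver with the `init` argument and parses its stdout as JSON. Since the `uds` binary was never installed on this node, every attempt fails twice over — the exec itself, and then the JSON decode of the empty output. A minimal Go sketch (standard library only; illustrative, not kubelet's actual code) reproducing both error strings:

```go
// Sketch reproducing the two errors in the FlexVolume probe loop above:
// the driver binary is missing, so the exec fails, and the resulting
// empty stdout cannot be parsed as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON reply a FlexVolume driver is
// expected to print for "init"; the field set here is illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Looking up a binary that is not installed yields exec.ErrNotFound,
	// which prints as "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("FlexVolume: driver call failed:", err)
	}

	// With nothing executed there is no output; unmarshalling "" fails
	// with "unexpected end of JSON input", the other repeated error.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("Failed to unmarshal output:", err)
	}
}
```

The probe fires on every plugin-directory event, which is why the same pair of errors recurs many times per second here; the messages are noisy but harmless when FlexVolume drivers are not in use.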
Jan 30 13:53:56.425510 kubelet[2606]: E0130 13:53:56.423919 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.425510 kubelet[2606]: W0130 13:53:56.423944 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.426108 kubelet[2606]: E0130 13:53:56.425778 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.426430 kubelet[2606]: E0130 13:53:56.426242 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.426777 kubelet[2606]: E0130 13:53:56.426757 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.427196 kubelet[2606]: W0130 13:53:56.426891 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.427196 kubelet[2606]: E0130 13:53:56.426921 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.427790 kubelet[2606]: E0130 13:53:56.427759 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.427790 kubelet[2606]: W0130 13:53:56.427782 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.427937 kubelet[2606]: E0130 13:53:56.427800 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.430529 containerd[1455]: time="2025-01-30T13:53:56.430454822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g86lj,Uid:a96d4586-88b8-4432-b91b-9da478e7f363,Namespace:calico-system,Attempt:0,}" Jan 30 13:53:56.460711 kubelet[2606]: E0130 13:53:56.460294 2606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:56.462280 kubelet[2606]: W0130 13:53:56.460480 2606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:56.462280 kubelet[2606]: E0130 13:53:56.462208 2606 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:56.516452 containerd[1455]: time="2025-01-30T13:53:56.514655414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:56.517330 containerd[1455]: time="2025-01-30T13:53:56.516370408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:56.517330 containerd[1455]: time="2025-01-30T13:53:56.516400454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:56.518734 containerd[1455]: time="2025-01-30T13:53:56.517899059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:56.573333 systemd[1]: Started cri-containerd-a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855.scope - libcontainer container a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855. Jan 30 13:53:56.656445 containerd[1455]: time="2025-01-30T13:53:56.656391476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588665988d-prpvz,Uid:dae3a7d6-38c6-4e8c-8aa4-c4156335f356,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7a70c52acd63af9436882b442644967218a492999394df034bd0e36a88ef87b\"" Jan 30 13:53:56.662512 containerd[1455]: time="2025-01-30T13:53:56.661697054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:53:56.663825 containerd[1455]: time="2025-01-30T13:53:56.663663758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g86lj,Uid:a96d4586-88b8-4432-b91b-9da478e7f363,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\"" Jan 30 13:53:57.608337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159651739.mount: Deactivated successfully. Jan 30 13:53:58.427368 kubelet[2606]: E0130 13:53:58.427311 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:53:58.453182 containerd[1455]: time="2025-01-30T13:53:58.453055140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:58.455372 containerd[1455]: time="2025-01-30T13:53:58.455250306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:53:58.457452 containerd[1455]: time="2025-01-30T13:53:58.457386326Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:58.462668 containerd[1455]: time="2025-01-30T13:53:58.462623368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:58.465255 containerd[1455]: time="2025-01-30T13:53:58.463767125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.802018291s" Jan 30 13:53:58.465255 containerd[1455]: time="2025-01-30T13:53:58.463846053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image 
reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:53:58.465506 containerd[1455]: time="2025-01-30T13:53:58.465468580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:53:58.493354 containerd[1455]: time="2025-01-30T13:53:58.492439982Z" level=info msg="CreateContainer within sandbox \"b7a70c52acd63af9436882b442644967218a492999394df034bd0e36a88ef87b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:53:58.513439 containerd[1455]: time="2025-01-30T13:53:58.513387143Z" level=info msg="CreateContainer within sandbox \"b7a70c52acd63af9436882b442644967218a492999394df034bd0e36a88ef87b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"094d0589ec59037b83c6a629966eb70b544628ca4d3149fcec2200f783977824\"" Jan 30 13:53:58.514700 containerd[1455]: time="2025-01-30T13:53:58.514635487Z" level=info msg="StartContainer for \"094d0589ec59037b83c6a629966eb70b544628ca4d3149fcec2200f783977824\"" Jan 30 13:53:58.576327 systemd[1]: Started cri-containerd-094d0589ec59037b83c6a629966eb70b544628ca4d3149fcec2200f783977824.scope - libcontainer container 094d0589ec59037b83c6a629966eb70b544628ca4d3149fcec2200f783977824. Jan 30 13:53:58.641302 containerd[1455]: time="2025-01-30T13:53:58.641238596Z" level=info msg="StartContainer for \"094d0589ec59037b83c6a629966eb70b544628ca4d3149fcec2200f783977824\" returns successfully" Jan 30 13:53:59.387984 containerd[1455]: time="2025-01-30T13:53:59.387910628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:59.389164 containerd[1455]: time="2025-01-30T13:53:59.389068259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:53:59.390451 containerd[1455]: time="2025-01-30T13:53:59.390369654Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:59.396116 containerd[1455]: time="2025-01-30T13:53:59.394034214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:59.397740 containerd[1455]: time="2025-01-30T13:53:59.397688912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 932.173531ms" Jan 30 13:53:59.397859 containerd[1455]: time="2025-01-30T13:53:59.397755121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:53:59.402479 containerd[1455]: time="2025-01-30T13:53:59.402437942Z" level=info msg="CreateContainer within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:53:59.425115 containerd[1455]: time="2025-01-30T13:53:59.425025539Z" level=info msg="CreateContainer 
within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a\"" Jan 30 13:53:59.426278 containerd[1455]: time="2025-01-30T13:53:59.426230197Z" level=info msg="StartContainer for \"a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a\"" Jan 30 13:53:59.466316 systemd[1]: Started cri-containerd-a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a.scope - libcontainer container a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a. Jan 30 13:53:59.509495 containerd[1455]: time="2025-01-30T13:53:59.509409757Z" level=info msg="StartContainer for \"a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a\" returns successfully" Jan 30 13:53:59.529480 systemd[1]: cri-containerd-a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a.scope: Deactivated successfully. Jan 30 13:53:59.568964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a-rootfs.mount: Deactivated successfully. Jan 30 13:53:59.597457 kubelet[2606]: I0130 13:53:59.597357 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588665988d-prpvz" podStartSLOduration=2.792878859 podStartE2EDuration="4.597319403s" podCreationTimestamp="2025-01-30 13:53:55 +0000 UTC" firstStartedPulling="2025-01-30 13:53:56.66050601 +0000 UTC m=+23.404005126" lastFinishedPulling="2025-01-30 13:53:58.464946558 +0000 UTC m=+25.208445670" observedRunningTime="2025-01-30 13:53:59.59591577 +0000 UTC m=+26.339414899" watchObservedRunningTime="2025-01-30 13:53:59.597319403 +0000 UTC m=+26.340818534" Jan 30 13:54:00.144533 containerd[1455]: time="2025-01-30T13:54:00.144447659Z" level=info msg="shim disconnected" id=a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a namespace=k8s.io Jan 30 13:54:00.144533 containerd[1455]: time="2025-01-30T13:54:00.144530016Z" level=warning msg="cleaning up after shim disconnected" id=a67d83c03e24ec634c8852d1d729c49c7215fafbba0794707c2962a02a7b222a namespace=k8s.io Jan 30 13:54:00.144533 containerd[1455]: time="2025-01-30T13:54:00.144543935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:00.427230 kubelet[2606]: E0130 13:54:00.426604 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:54:00.585578 kubelet[2606]: I0130 13:54:00.585520 2606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:00.587464 containerd[1455]: time="2025-01-30T13:54:00.587412487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:54:02.427902 kubelet[2606]: E0130 13:54:02.427832 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:54:04.426906 kubelet[2606]: E0130 13:54:04.426845 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:54:04.432006 containerd[1455]: time="2025-01-30T13:54:04.431950751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:04.433364 containerd[1455]: time="2025-01-30T13:54:04.433296819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:54:04.434734 containerd[1455]: time="2025-01-30T13:54:04.434450672Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:04.438020 containerd[1455]: time="2025-01-30T13:54:04.437951114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:04.439469 containerd[1455]: time="2025-01-30T13:54:04.439192463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.851721754s" Jan 30 13:54:04.439469 containerd[1455]: time="2025-01-30T13:54:04.439243790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:54:04.443117 containerd[1455]: time="2025-01-30T13:54:04.442721557Z" level=info msg="CreateContainer within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:54:04.467942 containerd[1455]: time="2025-01-30T13:54:04.467880568Z" level=info msg="CreateContainer within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205\"" Jan 30 13:54:04.469062 containerd[1455]: time="2025-01-30T13:54:04.468671502Z" level=info msg="StartContainer for \"d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205\"" Jan 30 13:54:04.513490 systemd[1]: Started cri-containerd-d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205.scope - libcontainer container d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205. Jan 30 13:54:04.555061 containerd[1455]: time="2025-01-30T13:54:04.554999800Z" level=info msg="StartContainer for \"d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205\" returns successfully" Jan 30 13:54:05.653171 systemd[1]: cri-containerd-d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205.scope: Deactivated successfully. Jan 30 13:54:05.670144 kubelet[2606]: I0130 13:54:05.668978 2606 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:54:05.691558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205-rootfs.mount: Deactivated successfully. 
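[Annotation] The durations in the pod_startup_latency_tracker entry above are internally consistent and worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check of the arithmetic, using the timestamps exactly as printed in the log:

```go
// Re-deriving the durations in the pod_startup_latency_tracker line
// from its own timestamps (layout and values copied from the log).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-01-30 13:53:55 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2025-01-30 13:53:56.66050601 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-01-30 13:53:58.464946558 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-30 13:53:59.597319403 +0000 UTC")

	fmt.Println("image pull window:   ", lastPull.Sub(firstPull)) // ~1.804s, matching the 1.802s typha pull
	fmt.Println("podStartE2EDuration: ", running.Sub(created))    // 4.597319403s, as logged
	fmt.Println("E2E minus pull:      ", running.Sub(created)-lastPull.Sub(firstPull))
}
```

The last figure lands within a few nanoseconds of the logged podStartSLOduration of 2.792878859s; the tiny disagreement is just the truncated precision of the printed firstStartedPulling value.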
Jan 30 13:54:05.719571 kubelet[2606]: I0130 13:54:05.719507 2606 topology_manager.go:215] "Topology Admit Handler" podUID="9f3c7838-002a-42e5-a748-e9ea78b103bd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v7465" Jan 30 13:54:05.723792 kubelet[2606]: I0130 13:54:05.723668 2606 topology_manager.go:215] "Topology Admit Handler" podUID="1df2e48e-0b26-4bea-a871-83ff0735e248" podNamespace="calico-system" podName="calico-kube-controllers-7c84f96c9b-fzvvr" Jan 30 13:54:05.735166 kubelet[2606]: I0130 13:54:05.734497 2606 topology_manager.go:215] "Topology Admit Handler" podUID="042a4ac1-a034-4cca-8cee-8de63b6b51bd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dtpgm" Jan 30 13:54:05.739549 kubelet[2606]: I0130 13:54:05.739149 2606 topology_manager.go:215] "Topology Admit Handler" podUID="abf0d867-27ff-456c-8a63-367b3c10edb1" podNamespace="calico-apiserver" podName="calico-apiserver-759d756c8b-9bswl" Jan 30 13:54:05.741190 kubelet[2606]: I0130 13:54:05.741049 2606 topology_manager.go:215] "Topology Admit Handler" podUID="5b9b7590-9585-4f18-ab1c-6fd1a8042bb6" podNamespace="calico-apiserver" podName="calico-apiserver-759d756c8b-wvvmt" Jan 30 13:54:05.745990 systemd[1]: Created slice kubepods-burstable-pod9f3c7838_002a_42e5_a748_e9ea78b103bd.slice - libcontainer container kubepods-burstable-pod9f3c7838_002a_42e5_a748_e9ea78b103bd.slice. Jan 30 13:54:05.762289 systemd[1]: Created slice kubepods-besteffort-pod1df2e48e_0b26_4bea_a871_83ff0735e248.slice - libcontainer container kubepods-besteffort-pod1df2e48e_0b26_4bea_a871_83ff0735e248.slice. Jan 30 13:54:05.774047 systemd[1]: Created slice kubepods-burstable-pod042a4ac1_a034_4cca_8cee_8de63b6b51bd.slice - libcontainer container kubepods-burstable-pod042a4ac1_a034_4cca_8cee_8de63b6b51bd.slice. Jan 30 13:54:05.778142 kubelet[2606]: I0130 13:54:05.775297 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx25k\" (UniqueName: \"kubernetes.io/projected/042a4ac1-a034-4cca-8cee-8de63b6b51bd-kube-api-access-gx25k\") pod \"coredns-7db6d8ff4d-dtpgm\" (UID: \"042a4ac1-a034-4cca-8cee-8de63b6b51bd\") " pod="kube-system/coredns-7db6d8ff4d-dtpgm" Jan 30 13:54:05.778142 kubelet[2606]: I0130 13:54:05.775347 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3c7838-002a-42e5-a748-e9ea78b103bd-config-volume\") pod \"coredns-7db6d8ff4d-v7465\" (UID: \"9f3c7838-002a-42e5-a748-e9ea78b103bd\") " pod="kube-system/coredns-7db6d8ff4d-v7465" Jan 30 13:54:05.778142 kubelet[2606]: I0130 13:54:05.775377 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkl2\" (UniqueName: \"kubernetes.io/projected/9f3c7838-002a-42e5-a748-e9ea78b103bd-kube-api-access-sdkl2\") pod \"coredns-7db6d8ff4d-v7465\" (UID: \"9f3c7838-002a-42e5-a748-e9ea78b103bd\") " pod="kube-system/coredns-7db6d8ff4d-v7465" Jan 30 13:54:05.778142 kubelet[2606]: I0130 13:54:05.775409 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abf0d867-27ff-456c-8a63-367b3c10edb1-calico-apiserver-certs\") pod \"calico-apiserver-759d756c8b-9bswl\" (UID: \"abf0d867-27ff-456c-8a63-367b3c10edb1\") " pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" Jan 30 13:54:05.778142 kubelet[2606]: I0130 13:54:05.775442 2606 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/042a4ac1-a034-4cca-8cee-8de63b6b51bd-config-volume\") pod \"coredns-7db6d8ff4d-dtpgm\" (UID: \"042a4ac1-a034-4cca-8cee-8de63b6b51bd\") " pod="kube-system/coredns-7db6d8ff4d-dtpgm" Jan 30 13:54:05.778644 kubelet[2606]: I0130 13:54:05.775486 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhzv\" (UniqueName: \"kubernetes.io/projected/1df2e48e-0b26-4bea-a871-83ff0735e248-kube-api-access-wwhzv\") pod \"calico-kube-controllers-7c84f96c9b-fzvvr\" (UID: \"1df2e48e-0b26-4bea-a871-83ff0735e248\") " pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" Jan 30 13:54:05.778644 kubelet[2606]: I0130 13:54:05.775519 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1df2e48e-0b26-4bea-a871-83ff0735e248-tigera-ca-bundle\") pod \"calico-kube-controllers-7c84f96c9b-fzvvr\" (UID: \"1df2e48e-0b26-4bea-a871-83ff0735e248\") " pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" Jan 30 13:54:05.778644 kubelet[2606]: I0130 13:54:05.775551 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k8nl\" (UniqueName: \"kubernetes.io/projected/abf0d867-27ff-456c-8a63-367b3c10edb1-kube-api-access-7k8nl\") pod \"calico-apiserver-759d756c8b-9bswl\" (UID: \"abf0d867-27ff-456c-8a63-367b3c10edb1\") " pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" Jan 30 13:54:05.778644 kubelet[2606]: I0130 13:54:05.775582 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b9b7590-9585-4f18-ab1c-6fd1a8042bb6-calico-apiserver-certs\") pod \"calico-apiserver-759d756c8b-wvvmt\" (UID: \"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6\") " pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" Jan 30 13:54:05.778644 kubelet[2606]: I0130 13:54:05.775617 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4w9\" (UniqueName: \"kubernetes.io/projected/5b9b7590-9585-4f18-ab1c-6fd1a8042bb6-kube-api-access-qp4w9\") pod \"calico-apiserver-759d756c8b-wvvmt\" (UID: \"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6\") " pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" Jan 30 13:54:05.787299 systemd[1]: Created slice kubepods-besteffort-podabf0d867_27ff_456c_8a63_367b3c10edb1.slice - libcontainer container kubepods-besteffort-podabf0d867_27ff_456c_8a63_367b3c10edb1.slice. Jan 30 13:54:05.801618 systemd[1]: Created slice kubepods-besteffort-pod5b9b7590_9585_4f18_ab1c_6fd1a8042bb6.slice - libcontainer container kubepods-besteffort-pod5b9b7590_9585_4f18_ab1c_6fd1a8042bb6.slice. 
Jan 30 13:54:06.060302 containerd[1455]: time="2025-01-30T13:54:06.059439938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7465,Uid:9f3c7838-002a-42e5-a748-e9ea78b103bd,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:06.068576 containerd[1455]: time="2025-01-30T13:54:06.068509384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84f96c9b-fzvvr,Uid:1df2e48e-0b26-4bea-a871-83ff0735e248,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:06.081165 containerd[1455]: time="2025-01-30T13:54:06.081067659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dtpgm,Uid:042a4ac1-a034-4cca-8cee-8de63b6b51bd,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:06.096586 containerd[1455]: time="2025-01-30T13:54:06.096510232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-9bswl,Uid:abf0d867-27ff-456c-8a63-367b3c10edb1,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:54:06.106063 containerd[1455]: time="2025-01-30T13:54:06.106010477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-wvvmt,Uid:5b9b7590-9585-4f18-ab1c-6fd1a8042bb6,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:54:06.434764 systemd[1]: Created slice kubepods-besteffort-pod4ce51f87_697e_49a4_af41_1b0a623704f3.slice - libcontainer container kubepods-besteffort-pod4ce51f87_697e_49a4_af41_1b0a623704f3.slice. Jan 30 13:54:06.438371 containerd[1455]: time="2025-01-30T13:54:06.438313994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hs4d,Uid:4ce51f87-697e-49a4-af41-1b0a623704f3,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:06.452691 containerd[1455]: time="2025-01-30T13:54:06.452608800Z" level=info msg="shim disconnected" id=d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205 namespace=k8s.io Jan 30 13:54:06.452691 containerd[1455]: time="2025-01-30T13:54:06.452674735Z" level=warning msg="cleaning up after shim disconnected" id=d21ebdac99a24a3950552a17924f101aee6cb9e8f6b6cb812a4bff4b7e2c7205 namespace=k8s.io Jan 30 13:54:06.452691 containerd[1455]: time="2025-01-30T13:54:06.452690755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:06.622302 containerd[1455]: time="2025-01-30T13:54:06.621746465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:54:06.846516 containerd[1455]: time="2025-01-30T13:54:06.846446759Z" level=error msg="Failed to destroy network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.847542 containerd[1455]: time="2025-01-30T13:54:06.847388895Z" level=error msg="encountered an error cleaning up failed sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.847542 containerd[1455]: time="2025-01-30T13:54:06.847467811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84f96c9b-fzvvr,Uid:1df2e48e-0b26-4bea-a871-83ff0735e248,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.852291 kubelet[2606]: E0130 13:54:06.850335 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.852291 kubelet[2606]: E0130 13:54:06.850422 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" Jan 30 13:54:06.852291 kubelet[2606]: E0130 13:54:06.850454 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" Jan 30 13:54:06.855354 kubelet[2606]: E0130 13:54:06.850511 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c84f96c9b-fzvvr_calico-system(1df2e48e-0b26-4bea-a871-83ff0735e248)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c84f96c9b-fzvvr_calico-system(1df2e48e-0b26-4bea-a871-83ff0735e248)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" podUID="1df2e48e-0b26-4bea-a871-83ff0735e248" Jan 30 13:54:06.854014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169-shm.mount: Deactivated successfully. 
Jan 30 13:54:06.890127 containerd[1455]: time="2025-01-30T13:54:06.889319077Z" level=error msg="Failed to destroy network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.890127 containerd[1455]: time="2025-01-30T13:54:06.889601212Z" level=error msg="Failed to destroy network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.898774 containerd[1455]: time="2025-01-30T13:54:06.897500431Z" level=error msg="encountered an error cleaning up failed sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.898774 containerd[1455]: time="2025-01-30T13:54:06.897603120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hs4d,Uid:4ce51f87-697e-49a4-af41-1b0a623704f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.900385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52-shm.mount: Deactivated successfully. 
Jan 30 13:54:06.901670 kubelet[2606]: E0130 13:54:06.901402 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.901670 kubelet[2606]: E0130 13:54:06.901477 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hs4d" Jan 30 13:54:06.901670 kubelet[2606]: E0130 13:54:06.901511 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hs4d" Jan 30 13:54:06.901859 containerd[1455]: time="2025-01-30T13:54:06.901360852Z" level=error msg="Failed to destroy network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.900555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba-shm.mount: Deactivated successfully. 
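[Annotation] The `rpc error: code = Unknown desc = ...` prefix that kubelet attaches to each of these messages is gRPC framing: containerd's CRI server returns a plain Go error, which crosses the CRI socket as a status with code Unknown, and kubelet logs the status error's string form. A minimal illustration using google.golang.org/grpc/status (a sketch of the framing only, not the actual CRI plumbing):

```go
// How a plain error becomes "rpc error: code = Unknown desc = ..."
// when it crosses a gRPC boundary such as the CRI socket.
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// A non-status error is mapped to codes.Unknown by status.Convert,
	// as happens server-side when a handler returns an ordinary error.
	cause := errors.New(`failed to setup network for sandbox "48ac7...": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
	err := status.Convert(cause).Err()
	fmt.Println(err)
	// rpc error: code = Unknown desc = failed to setup network for sandbox "48ac7...": ...
}
```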
Jan 30 13:54:06.902036 kubelet[2606]: E0130 13:54:06.901569 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9hs4d_calico-system(4ce51f87-697e-49a4-af41-1b0a623704f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9hs4d_calico-system(4ce51f87-697e-49a4-af41-1b0a623704f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:54:06.911034 kubelet[2606]: E0130 13:54:06.910775 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.911034 kubelet[2606]: E0130 13:54:06.910840 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" Jan 30 13:54:06.911034 kubelet[2606]: E0130 13:54:06.910874 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" Jan 30 13:54:06.912558 containerd[1455]: time="2025-01-30T13:54:06.910378452Z" level=error msg="encountered an error cleaning up failed sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912558 containerd[1455]: time="2025-01-30T13:54:06.910460617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-9bswl,Uid:abf0d867-27ff-456c-8a63-367b3c10edb1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912558 containerd[1455]: time="2025-01-30T13:54:06.911702126Z" level=error msg="Failed to destroy network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912558 containerd[1455]: time="2025-01-30T13:54:06.912118235Z" level=error msg="encountered an error cleaning up failed sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912558 containerd[1455]: time="2025-01-30T13:54:06.912180515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7465,Uid:9f3c7838-002a-42e5-a748-e9ea78b103bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912889 kubelet[2606]: E0130 13:54:06.910943 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-759d756c8b-9bswl_calico-apiserver(abf0d867-27ff-456c-8a63-367b3c10edb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759d756c8b-9bswl_calico-apiserver(abf0d867-27ff-456c-8a63-367b3c10edb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" podUID="abf0d867-27ff-456c-8a63-367b3c10edb1" Jan 30 13:54:06.912889 kubelet[2606]: E0130 13:54:06.912460 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.912889 kubelet[2606]: E0130 13:54:06.912542 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v7465" Jan 30 13:54:06.915018 kubelet[2606]: E0130 13:54:06.912572 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v7465" Jan 30 13:54:06.915018 kubelet[2606]: E0130 13:54:06.912660 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-v7465_kube-system(9f3c7838-002a-42e5-a748-e9ea78b103bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-v7465_kube-system(9f3c7838-002a-42e5-a748-e9ea78b103bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v7465" podUID="9f3c7838-002a-42e5-a748-e9ea78b103bd" Jan 30 13:54:06.915018 kubelet[2606]: E0130 13:54:06.913188 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918067 containerd[1455]: time="2025-01-30T13:54:06.912880061Z" level=error msg="encountered an error cleaning up failed sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918067 containerd[1455]: time="2025-01-30T13:54:06.912948490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dtpgm,Uid:042a4ac1-a034-4cca-8cee-8de63b6b51bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918067 containerd[1455]: time="2025-01-30T13:54:06.915324307Z" level=error msg="Failed to destroy network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918067 containerd[1455]: time="2025-01-30T13:54:06.916410586Z" level=error msg="encountered an error cleaning up failed sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918067 containerd[1455]: time="2025-01-30T13:54:06.916476464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-wvvmt,Uid:5b9b7590-9585-4f18-ab1c-6fd1a8042bb6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.917560 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d-shm.mount: Deactivated successfully. Jan 30 13:54:06.918501 kubelet[2606]: E0130 13:54:06.913233 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dtpgm" Jan 30 13:54:06.918501 kubelet[2606]: E0130 13:54:06.913262 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dtpgm" Jan 30 13:54:06.918501 kubelet[2606]: E0130 13:54:06.913310 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dtpgm_kube-system(042a4ac1-a034-4cca-8cee-8de63b6b51bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dtpgm_kube-system(042a4ac1-a034-4cca-8cee-8de63b6b51bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dtpgm" podUID="042a4ac1-a034-4cca-8cee-8de63b6b51bd" Jan 30 13:54:06.918694 kubelet[2606]: E0130 13:54:06.916647 2606 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:06.918694 kubelet[2606]: E0130 13:54:06.916724 2606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" Jan 30 13:54:06.918694 kubelet[2606]: E0130 13:54:06.916787 2606 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" Jan 30 13:54:06.918846 kubelet[2606]: E0130 13:54:06.916884 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-759d756c8b-wvvmt_calico-apiserver(5b9b7590-9585-4f18-ab1c-6fd1a8042bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759d756c8b-wvvmt_calico-apiserver(5b9b7590-9585-4f18-ab1c-6fd1a8042bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" podUID="5b9b7590-9585-4f18-ab1c-6fd1a8042bb6" Jan 30 13:54:06.927966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2-shm.mount: Deactivated successfully. Jan 30 13:54:06.928144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245-shm.mount: Deactivated successfully. Jan 30 13:54:07.622925 kubelet[2606]: I0130 13:54:07.622890 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:07.628253 containerd[1455]: time="2025-01-30T13:54:07.626581056Z" level=info msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" Jan 30 13:54:07.628253 containerd[1455]: time="2025-01-30T13:54:07.627723933Z" level=info msg="Ensure that sandbox 48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52 in task-service has been cleanup successfully" Jan 30 13:54:07.637949 kubelet[2606]: I0130 13:54:07.637883 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:07.646136 containerd[1455]: time="2025-01-30T13:54:07.645422286Z" level=info msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" Jan 30 13:54:07.646136 containerd[1455]: time="2025-01-30T13:54:07.645998834Z" level=info msg="Ensure that sandbox 0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2 in task-service has been cleanup successfully" Jan 30 13:54:07.663712 kubelet[2606]: I0130 13:54:07.663681 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:07.673174 containerd[1455]: time="2025-01-30T13:54:07.673126635Z" level=info msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" Jan 30 13:54:07.673953 containerd[1455]: time="2025-01-30T13:54:07.673901568Z" level=info msg="Ensure that sandbox 7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d in task-service has been cleanup successfully" Jan 30 13:54:07.724967 kubelet[2606]: I0130 13:54:07.723401 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:07.726676 containerd[1455]: time="2025-01-30T13:54:07.726629876Z" level=info msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" Jan 30 13:54:07.728757 containerd[1455]: time="2025-01-30T13:54:07.728722582Z" level=info msg="Ensure that sandbox 11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba in task-service has been cleanup 
successfully" Jan 30 13:54:07.731515 kubelet[2606]: I0130 13:54:07.731486 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:07.735223 containerd[1455]: time="2025-01-30T13:54:07.735152312Z" level=info msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" Jan 30 13:54:07.735563 containerd[1455]: time="2025-01-30T13:54:07.735531997Z" level=info msg="Ensure that sandbox 041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169 in task-service has been cleanup successfully" Jan 30 13:54:07.741614 kubelet[2606]: I0130 13:54:07.741518 2606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:07.742871 containerd[1455]: time="2025-01-30T13:54:07.742310045Z" level=info msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" Jan 30 13:54:07.742871 containerd[1455]: time="2025-01-30T13:54:07.742541730Z" level=info msg="Ensure that sandbox 53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245 in task-service has been cleanup successfully" Jan 30 13:54:07.767228 containerd[1455]: time="2025-01-30T13:54:07.767160927Z" level=error msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" failed" error="failed to destroy network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.767650 kubelet[2606]: E0130 13:54:07.767599 2606 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:07.767779 kubelet[2606]: E0130 13:54:07.767681 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2"} Jan 30 13:54:07.767840 kubelet[2606]: E0130 13:54:07.767789 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.767963 kubelet[2606]: E0130 13:54:07.767827 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" podUID="5b9b7590-9585-4f18-ab1c-6fd1a8042bb6" Jan 30 13:54:07.820204 containerd[1455]: time="2025-01-30T13:54:07.819832116Z" level=error msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" failed" error="failed to destroy network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.821132 kubelet[2606]: E0130 13:54:07.820518 2606 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:07.821132 kubelet[2606]: E0130 13:54:07.820585 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52"} Jan 30 13:54:07.821132 kubelet[2606]: E0130 13:54:07.820641 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ce51f87-697e-49a4-af41-1b0a623704f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.821132 kubelet[2606]: E0130 13:54:07.820678 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ce51f87-697e-49a4-af41-1b0a623704f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9hs4d" podUID="4ce51f87-697e-49a4-af41-1b0a623704f3" Jan 30 13:54:07.853340 containerd[1455]: time="2025-01-30T13:54:07.853268429Z" level=error msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" failed" error="failed to destroy network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.853710 kubelet[2606]: E0130 13:54:07.853636 2606 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:07.854378 kubelet[2606]: E0130 13:54:07.853707 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d"} Jan 30 13:54:07.854378 kubelet[2606]: E0130 13:54:07.853755 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abf0d867-27ff-456c-8a63-367b3c10edb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.854378 kubelet[2606]: E0130 13:54:07.853790 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abf0d867-27ff-456c-8a63-367b3c10edb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" podUID="abf0d867-27ff-456c-8a63-367b3c10edb1" Jan 30 13:54:07.872311 containerd[1455]: time="2025-01-30T13:54:07.871808663Z" level=error msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" failed" error="failed to destroy network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.872490 kubelet[2606]: E0130 13:54:07.872143 2606 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:07.872490 kubelet[2606]: E0130 13:54:07.872203 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba"} Jan 30 13:54:07.872490 kubelet[2606]: E0130 13:54:07.872252 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"042a4ac1-a034-4cca-8cee-8de63b6b51bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.872490 kubelet[2606]: E0130 13:54:07.872290 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"042a4ac1-a034-4cca-8cee-8de63b6b51bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dtpgm" podUID="042a4ac1-a034-4cca-8cee-8de63b6b51bd" Jan 30 13:54:07.883301 containerd[1455]: time="2025-01-30T13:54:07.882713053Z" level=error msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" failed" error="failed to destroy network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.883656 kubelet[2606]: E0130 13:54:07.882987 2606 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:07.883656 kubelet[2606]: E0130 13:54:07.883048 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169"} Jan 30 13:54:07.883656 kubelet[2606]: E0130 13:54:07.883133 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1df2e48e-0b26-4bea-a871-83ff0735e248\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.884803 kubelet[2606]: E0130 13:54:07.883170 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1df2e48e-0b26-4bea-a871-83ff0735e248\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" podUID="1df2e48e-0b26-4bea-a871-83ff0735e248" Jan 30 13:54:07.894762 containerd[1455]: time="2025-01-30T13:54:07.894703511Z" level=error msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" failed" error="failed to destroy network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:07.895036 kubelet[2606]: E0130 13:54:07.894992 2606 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:07.895162 kubelet[2606]: E0130 13:54:07.895056 2606 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245"} Jan 30 13:54:07.895162 kubelet[2606]: E0130 13:54:07.895144 2606 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f3c7838-002a-42e5-a748-e9ea78b103bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:07.895316 kubelet[2606]: E0130 13:54:07.895179 2606 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f3c7838-002a-42e5-a748-e9ea78b103bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v7465" podUID="9f3c7838-002a-42e5-a748-e9ea78b103bd" Jan 30 13:54:13.221620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185138125.mount: Deactivated successfully. 
Jan 30 13:54:13.262758 containerd[1455]: time="2025-01-30T13:54:13.262289913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:13.264314 containerd[1455]: time="2025-01-30T13:54:13.264243189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:54:13.265673 containerd[1455]: time="2025-01-30T13:54:13.265608312Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:13.268686 containerd[1455]: time="2025-01-30T13:54:13.268616935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:13.269868 containerd[1455]: time="2025-01-30T13:54:13.269664759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.647856871s" Jan 30 13:54:13.269868 containerd[1455]: time="2025-01-30T13:54:13.269727604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:54:13.292851 containerd[1455]: time="2025-01-30T13:54:13.292156885Z" level=info msg="CreateContainer within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:54:13.321883 containerd[1455]: time="2025-01-30T13:54:13.321825660Z" level=info msg="CreateContainer within sandbox \"a0bdde3adaec2e6f13e685c50e9672a2ca5745bb343eef3864b5665c9bec4855\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c8240ccd9dfe69b551d9c1f21ff8bd5f39d8e3c6a386f9dcd8130c0a6f7f3e7\"" Jan 30 13:54:13.325706 containerd[1455]: time="2025-01-30T13:54:13.323788238Z" level=info msg="StartContainer for \"1c8240ccd9dfe69b551d9c1f21ff8bd5f39d8e3c6a386f9dcd8130c0a6f7f3e7\"" Jan 30 13:54:13.325410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090351966.mount: Deactivated successfully. Jan 30 13:54:13.369330 systemd[1]: Started cri-containerd-1c8240ccd9dfe69b551d9c1f21ff8bd5f39d8e3c6a386f9dcd8130c0a6f7f3e7.scope - libcontainer container 1c8240ccd9dfe69b551d9c1f21ff8bd5f39d8e3c6a386f9dcd8130c0a6f7f3e7. Jan 30 13:54:13.409278 containerd[1455]: time="2025-01-30T13:54:13.409224528Z" level=info msg="StartContainer for \"1c8240ccd9dfe69b551d9c1f21ff8bd5f39d8e3c6a386f9dcd8130c0a6f7f3e7\" returns successfully" Jan 30 13:54:13.515905 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:54:13.516201 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
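For scale, the pull entries above imply roughly 142742010 bytes in 6.647856871 s, about 20.5 MiB/s, assuming the "bytes read" counter spans the whole pull rather than a single layer:

    // Back-of-envelope throughput for the calico/node image pull above,
    // assuming "bytes read" covers the full pull window.
    package main

    import "fmt"

    func main() {
        const bytesRead = 142742010.0 // "bytes read" from the log
        const seconds = 6.647856871   // reported pull duration
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20))
    }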
Jan 30 13:54:17.810548 kubelet[2606]: I0130 13:54:17.810416 2606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:18.428393 containerd[1455]: time="2025-01-30T13:54:18.428236878Z" level=info msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" Jan 30 13:54:18.539394 kubelet[2606]: I0130 13:54:18.537537 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g86lj" podStartSLOduration=5.932204789 podStartE2EDuration="22.537513526s" podCreationTimestamp="2025-01-30 13:53:56 +0000 UTC" firstStartedPulling="2025-01-30 13:53:56.665696111 +0000 UTC m=+23.409195218" lastFinishedPulling="2025-01-30 13:54:13.27100483 +0000 UTC m=+40.014503955" observedRunningTime="2025-01-30 13:54:13.786482252 +0000 UTC m=+40.529981378" watchObservedRunningTime="2025-01-30 13:54:18.537513526 +0000 UTC m=+45.281012658" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.539 [INFO][3901] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.539 [INFO][3901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" iface="eth0" netns="/var/run/netns/cni-d3da6d85-a407-8161-cfd7-31191031dfd5" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.540 [INFO][3901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" iface="eth0" netns="/var/run/netns/cni-d3da6d85-a407-8161-cfd7-31191031dfd5" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.540 [INFO][3901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" iface="eth0" netns="/var/run/netns/cni-d3da6d85-a407-8161-cfd7-31191031dfd5" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.540 [INFO][3901] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.540 [INFO][3901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.574 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.574 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.574 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.581 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.581 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.583 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:18.587907 containerd[1455]: 2025-01-30 13:54:18.586 [INFO][3901] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:18.589340 containerd[1455]: time="2025-01-30T13:54:18.588238946Z" level=info msg="TearDown network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" successfully" Jan 30 13:54:18.589340 containerd[1455]: time="2025-01-30T13:54:18.588278167Z" level=info msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" returns successfully" Jan 30 13:54:18.590995 containerd[1455]: time="2025-01-30T13:54:18.590936353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-wvvmt,Uid:5b9b7590-9585-4f18-ab1c-6fd1a8042bb6,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:54:18.595452 systemd[1]: run-netns-cni\x2dd3da6d85\x2da407\x2d8161\x2dcfd7\x2d31191031dfd5.mount: Deactivated successfully. 
Jan 30 13:54:18.761251 systemd-networkd[1367]: calia67f28586be: Link UP Jan 30 13:54:18.762023 systemd-networkd[1367]: calia67f28586be: Gained carrier Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.645 [INFO][3918] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.661 [INFO][3918] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0 calico-apiserver-759d756c8b- calico-apiserver 5b9b7590-9585-4f18-ab1c-6fd1a8042bb6 791 0 2025-01-30 13:53:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759d756c8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal calico-apiserver-759d756c8b-wvvmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia67f28586be [] []}} ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.661 [INFO][3918] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.697 [INFO][3929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" HandleID="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.712 [INFO][3929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" HandleID="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"calico-apiserver-759d756c8b-wvvmt", "timestamp":"2025-01-30 13:54:18.697747338 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.712 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.712 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.713 [INFO][3929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.715 [INFO][3929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.721 [INFO][3929] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.726 [INFO][3929] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.729 [INFO][3929] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.732 [INFO][3929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.732 [INFO][3929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.733 [INFO][3929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.739 [INFO][3929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.747 [INFO][3929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.1/26] block=192.168.80.0/26 handle="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.747 [INFO][3929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.1/26] handle="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.747 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
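The IPAM lines above show the standard Calico assignment path: take the host-wide lock, confirm this node's affinity for block 192.168.80.0/26, claim the next free address, write the block back, release the lock. A /26 block carries 64 addresses, so one affine block serves many pods on the node. A small sketch of the block arithmetic, using the values from the log:

    // Block math for the IPAM entries above: this node's affine block is
    // 192.168.80.0/26; .1 is claimed here and .2 goes to coredns further below.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.80.0/26")
        fmt.Printf("%s holds %d addresses\n", block, 1<<(32-block.Bits()))

        first := block.Addr().Next() // 192.168.80.1
        fmt.Println("next assignments:", first, first.Next())
    }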
Jan 30 13:54:18.788358 containerd[1455]: 2025-01-30 13:54:18.747 [INFO][3929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.1/26] IPv6=[] ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" HandleID="k8s-pod-network.7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.749 [INFO][3918] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-759d756c8b-wvvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia67f28586be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.750 [INFO][3918] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.1/32] ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.750 [INFO][3918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia67f28586be ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.764 [INFO][3918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" 
Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.765 [INFO][3918] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce", Pod:"calico-apiserver-759d756c8b-wvvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia67f28586be", MAC:"1a:ea:ab:97:81:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:18.791721 containerd[1455]: 2025-01-30 13:54:18.782 [INFO][3918] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-wvvmt" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:18.823723 containerd[1455]: time="2025-01-30T13:54:18.823211910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:18.823723 containerd[1455]: time="2025-01-30T13:54:18.823299691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:18.823723 containerd[1455]: time="2025-01-30T13:54:18.823327532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:18.823723 containerd[1455]: time="2025-01-30T13:54:18.823457447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:18.862301 systemd[1]: Started cri-containerd-7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce.scope - libcontainer container 7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce. Jan 30 13:54:18.915546 containerd[1455]: time="2025-01-30T13:54:18.915370011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-wvvmt,Uid:5b9b7590-9585-4f18-ab1c-6fd1a8042bb6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce\"" Jan 30 13:54:18.918340 containerd[1455]: time="2025-01-30T13:54:18.918299569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:54:19.430222 containerd[1455]: time="2025-01-30T13:54:19.429807573Z" level=info msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" Jan 30 13:54:19.433762 containerd[1455]: time="2025-01-30T13:54:19.433647021Z" level=info msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.557 [INFO][4012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.558 [INFO][4012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" iface="eth0" netns="/var/run/netns/cni-616d7fbe-291d-42ef-e631-8520448800ee" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.559 [INFO][4012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" iface="eth0" netns="/var/run/netns/cni-616d7fbe-291d-42ef-e631-8520448800ee" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.560 [INFO][4012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" iface="eth0" netns="/var/run/netns/cni-616d7fbe-291d-42ef-e631-8520448800ee" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.560 [INFO][4012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.560 [INFO][4012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.630 [INFO][4038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.631 [INFO][4038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.631 [INFO][4038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.642 [WARNING][4038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.642 [INFO][4038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.645 [INFO][4038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:19.652018 containerd[1455]: 2025-01-30 13:54:19.648 [INFO][4012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:19.655885 containerd[1455]: time="2025-01-30T13:54:19.653660872Z" level=info msg="TearDown network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" successfully" Jan 30 13:54:19.655885 containerd[1455]: time="2025-01-30T13:54:19.653735175Z" level=info msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" returns successfully" Jan 30 13:54:19.659413 containerd[1455]: time="2025-01-30T13:54:19.657448683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dtpgm,Uid:042a4ac1-a034-4cca-8cee-8de63b6b51bd,Namespace:kube-system,Attempt:1,}" Jan 30 13:54:19.660362 systemd[1]: run-netns-cni\x2d616d7fbe\x2d291d\x2d42ef\x2de631\x2d8520448800ee.mount: Deactivated successfully. Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.539 [INFO][4020] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.540 [INFO][4020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" iface="eth0" netns="/var/run/netns/cni-1a76d705-6319-7310-c78b-280001bfc352" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.544 [INFO][4020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" iface="eth0" netns="/var/run/netns/cni-1a76d705-6319-7310-c78b-280001bfc352" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.546 [INFO][4020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" iface="eth0" netns="/var/run/netns/cni-1a76d705-6319-7310-c78b-280001bfc352" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.546 [INFO][4020] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.546 [INFO][4020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.658 [INFO][4034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.661 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.661 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.675 [WARNING][4034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.675 [INFO][4034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.683 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:19.707203 containerd[1455]: 2025-01-30 13:54:19.691 [INFO][4020] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:19.714583 containerd[1455]: time="2025-01-30T13:54:19.711608679Z" level=info msg="TearDown network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" successfully" Jan 30 13:54:19.714583 containerd[1455]: time="2025-01-30T13:54:19.711664711Z" level=info msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" returns successfully" Jan 30 13:54:19.716340 containerd[1455]: time="2025-01-30T13:54:19.716265434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84f96c9b-fzvvr,Uid:1df2e48e-0b26-4bea-a871-83ff0735e248,Namespace:calico-system,Attempt:1,}" Jan 30 13:54:19.844248 systemd[1]: run-netns-cni\x2d1a76d705\x2d6319\x2d7310\x2dc78b\x2d280001bfc352.mount: Deactivated successfully. 
Jan 30 13:54:20.031986 systemd-networkd[1367]: calif17d5893b21: Link UP Jan 30 13:54:20.032434 systemd-networkd[1367]: calif17d5893b21: Gained carrier Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.774 [INFO][4060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.801 [INFO][4060] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0 coredns-7db6d8ff4d- kube-system 042a4ac1-a034-4cca-8cee-8de63b6b51bd 801 0 2025-01-30 13:53:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal coredns-7db6d8ff4d-dtpgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif17d5893b21 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.801 [INFO][4060] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.932 [INFO][4082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" HandleID="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.965 [INFO][4082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" HandleID="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000257a00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-dtpgm", "timestamp":"2025-01-30 13:54:19.93284796 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.965 [INFO][4082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.966 [INFO][4082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.966 [INFO][4082] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.969 [INFO][4082] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.978 [INFO][4082] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.986 [INFO][4082] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.989 [INFO][4082] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.994 [INFO][4082] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.995 [INFO][4082] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:19.997 [INFO][4082] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669 Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:20.006 [INFO][4082] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:20.018 [INFO][4082] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.2/26] block=192.168.80.0/26 handle="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:20.019 [INFO][4082] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.2/26] handle="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:20.019 [INFO][4082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:54:20.089753 containerd[1455]: 2025-01-30 13:54:20.019 [INFO][4082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.2/26] IPv6=[] ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" HandleID="k8s-pod-network.1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.022 [INFO][4060] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"042a4ac1-a034-4cca-8cee-8de63b6b51bd", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-dtpgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17d5893b21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.023 [INFO][4060] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.2/32] ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.023 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif17d5893b21 ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.029 [INFO][4060] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.030 [INFO][4060] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"042a4ac1-a034-4cca-8cee-8de63b6b51bd", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669", Pod:"coredns-7db6d8ff4d-dtpgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17d5893b21", MAC:"36:45:e7:20:0d:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:20.091986 containerd[1455]: 2025-01-30 13:54:20.082 [INFO][4060] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dtpgm" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:20.169551 systemd-networkd[1367]: caliedd4df1a695: Link UP Jan 30 13:54:20.170636 systemd-networkd[1367]: caliedd4df1a695: Gained carrier Jan 30 13:54:20.175658 containerd[1455]: time="2025-01-30T13:54:20.175487238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:20.176140 containerd[1455]: time="2025-01-30T13:54:20.175574757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:20.176140 containerd[1455]: time="2025-01-30T13:54:20.175593739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:20.176140 containerd[1455]: time="2025-01-30T13:54:20.175716794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:19.890 [INFO][4069] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:19.919 [INFO][4069] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0 calico-kube-controllers-7c84f96c9b- calico-system 1df2e48e-0b26-4bea-a871-83ff0735e248 800 0 2025-01-30 13:53:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c84f96c9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal calico-kube-controllers-7c84f96c9b-fzvvr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliedd4df1a695 [] []}} ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:19.919 [INFO][4069] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.028 [INFO][4094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" HandleID="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.085 [INFO][4094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" HandleID="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003195f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"calico-kube-controllers-7c84f96c9b-fzvvr", "timestamp":"2025-01-30 13:54:20.028526083 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.086 [INFO][4094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.086 [INFO][4094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.087 [INFO][4094] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.091 [INFO][4094] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.099 [INFO][4094] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.108 [INFO][4094] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.114 [INFO][4094] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.120 [INFO][4094] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.120 [INFO][4094] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.123 [INFO][4094] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533 Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.134 [INFO][4094] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.154 [INFO][4094] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.3/26] block=192.168.80.0/26 handle="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.155 [INFO][4094] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.3/26] handle="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.155 [INFO][4094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
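The endpoint=&v3.WorkloadEndpoint{...} dumps in these entries appear to be Go values rendered with the %#v (Go-syntax) verb, which is why every field name, quoted string, and nil slice is spelled out. A trimmed, hypothetical mirror of the fields most useful when reading them — not the real projectcalico.org/v3 types — reproduces the dump style with values taken from the coredns entry earlier in this log:

// Hypothetical, trimmed stand-in for v3.WorkloadEndpointSpec, for reading the dumps.
package main

import "fmt"

type workloadEndpointSpec struct {
	Node          string   // the host the endpoint lives on
	Pod           string   // the pod behind the endpoint
	Endpoint      string   // interface inside the pod
	IPNetworks    []string // the /32 assigned by IPAM
	InterfaceName string   // host-side veth (the cali* device systemd-networkd reports)
	MAC           string   // filled in by "Added Mac, interface name, and active container ID"
}

func main() {
	spec := &workloadEndpointSpec{
		Node:          "ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal",
		Pod:           "coredns-7db6d8ff4d-dtpgm",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.80.2/32"},
		InterfaceName: "calif17d5893b21",
		MAC:           "36:45:e7:20:0d:1e",
	}
	// %#v on a pointer yields the &pkg.Type{Field:"value", ...} form seen above.
	fmt.Printf("endpoint=%#v\n", spec)
}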
Jan 30 13:54:20.216589 containerd[1455]: 2025-01-30 13:54:20.155 [INFO][4094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.3/26] IPv6=[] ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" HandleID="k8s-pod-network.0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.162 [INFO][4069] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0", GenerateName:"calico-kube-controllers-7c84f96c9b-", Namespace:"calico-system", SelfLink:"", UID:"1df2e48e-0b26-4bea-a871-83ff0735e248", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84f96c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-7c84f96c9b-fzvvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedd4df1a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.162 [INFO][4069] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.3/32] ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.162 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedd4df1a695 ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.168 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.172 [INFO][4069] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0", GenerateName:"calico-kube-controllers-7c84f96c9b-", Namespace:"calico-system", SelfLink:"", UID:"1df2e48e-0b26-4bea-a871-83ff0735e248", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84f96c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533", Pod:"calico-kube-controllers-7c84f96c9b-fzvvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedd4df1a695", MAC:"b6:35:94:1a:b1:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:20.218004 containerd[1455]: 2025-01-30 13:54:20.211 [INFO][4069] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533" Namespace="calico-system" Pod="calico-kube-controllers-7c84f96c9b-fzvvr" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:20.241202 systemd[1]: run-containerd-runc-k8s.io-1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669-runc.aBJ1LZ.mount: Deactivated successfully. Jan 30 13:54:20.251321 systemd[1]: Started cri-containerd-1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669.scope - libcontainer container 1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669. Jan 30 13:54:20.312386 containerd[1455]: time="2025-01-30T13:54:20.312229696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:20.312885 containerd[1455]: time="2025-01-30T13:54:20.312331804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:20.312885 containerd[1455]: time="2025-01-30T13:54:20.312359131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:20.312885 containerd[1455]: time="2025-01-30T13:54:20.312497479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:20.360262 systemd-networkd[1367]: calia67f28586be: Gained IPv6LL Jan 30 13:54:20.384387 systemd[1]: Started cri-containerd-0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533.scope - libcontainer container 0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533. Jan 30 13:54:20.397209 containerd[1455]: time="2025-01-30T13:54:20.397147877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dtpgm,Uid:042a4ac1-a034-4cca-8cee-8de63b6b51bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669\"" Jan 30 13:54:20.402371 containerd[1455]: time="2025-01-30T13:54:20.402310685Z" level=info msg="CreateContainer within sandbox \"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:54:20.443824 containerd[1455]: time="2025-01-30T13:54:20.443268351Z" level=info msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" Jan 30 13:54:20.471858 containerd[1455]: time="2025-01-30T13:54:20.470225221Z" level=info msg="CreateContainer within sandbox \"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b513a14165f6c47e6717e636389bff06913d243fc4a5cd79c20b96cc89a9936\"" Jan 30 13:54:20.473831 containerd[1455]: time="2025-01-30T13:54:20.473792865Z" level=info msg="StartContainer for \"8b513a14165f6c47e6717e636389bff06913d243fc4a5cd79c20b96cc89a9936\"" Jan 30 13:54:20.503522 containerd[1455]: time="2025-01-30T13:54:20.503427035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84f96c9b-fzvvr,Uid:1df2e48e-0b26-4bea-a871-83ff0735e248,Namespace:calico-system,Attempt:1,} returns sandbox id \"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533\"" Jan 30 13:54:20.532610 systemd[1]: Started cri-containerd-8b513a14165f6c47e6717e636389bff06913d243fc4a5cd79c20b96cc89a9936.scope - libcontainer container 8b513a14165f6c47e6717e636389bff06913d243fc4a5cd79c20b96cc89a9936. Jan 30 13:54:20.602801 containerd[1455]: time="2025-01-30T13:54:20.601048997Z" level=info msg="StartContainer for \"8b513a14165f6c47e6717e636389bff06913d243fc4a5cd79c20b96cc89a9936\" returns successfully" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.587 [INFO][4210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.588 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" iface="eth0" netns="/var/run/netns/cni-64cb86f9-4c44-f386-05dc-56b11dd14933" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.589 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" iface="eth0" netns="/var/run/netns/cni-64cb86f9-4c44-f386-05dc-56b11dd14933" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.589 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" iface="eth0" netns="/var/run/netns/cni-64cb86f9-4c44-f386-05dc-56b11dd14933" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.589 [INFO][4210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.589 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.717 [INFO][4258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.718 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.718 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.734 [WARNING][4258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.734 [INFO][4258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.738 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:20.747910 containerd[1455]: 2025-01-30 13:54:20.743 [INFO][4210] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:20.749753 containerd[1455]: time="2025-01-30T13:54:20.749509860Z" level=info msg="TearDown network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" successfully" Jan 30 13:54:20.749753 containerd[1455]: time="2025-01-30T13:54:20.749576814Z" level=info msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" returns successfully" Jan 30 13:54:20.751708 containerd[1455]: time="2025-01-30T13:54:20.751667388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-9bswl,Uid:abf0d867-27ff-456c-8a63-367b3c10edb1,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:54:20.827226 kubelet[2606]: I0130 13:54:20.826585 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dtpgm" podStartSLOduration=32.826557041 podStartE2EDuration="32.826557041s" podCreationTimestamp="2025-01-30 13:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:20.825564888 +0000 UTC m=+47.569064017" watchObservedRunningTime="2025-01-30 13:54:20.826557041 +0000 UTC m=+47.570056165" Jan 30 13:54:20.846939 systemd[1]: run-netns-cni\x2d64cb86f9\x2d4c44\x2df386\x2d05dc\x2d56b11dd14933.mount: Deactivated successfully. Jan 30 13:54:20.877350 kubelet[2606]: I0130 13:54:20.876839 2606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:21.326622 systemd-networkd[1367]: cali71d3ddf9f3e: Link UP Jan 30 13:54:21.327872 systemd-networkd[1367]: cali71d3ddf9f3e: Gained carrier Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:20.919 [INFO][4278] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:20.975 [INFO][4278] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0 calico-apiserver-759d756c8b- calico-apiserver abf0d867-27ff-456c-8a63-367b3c10edb1 813 0 2025-01-30 13:53:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759d756c8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal calico-apiserver-759d756c8b-9bswl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali71d3ddf9f3e [] []}} ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:20.976 [INFO][4278] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.106 [INFO][4296] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" HandleID="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.139 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" HandleID="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"calico-apiserver-759d756c8b-9bswl", "timestamp":"2025-01-30 13:54:21.106926508 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.139 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.139 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.139 [INFO][4296] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.143 [INFO][4296] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.172 [INFO][4296] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.193 [INFO][4296] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.198 [INFO][4296] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.221 [INFO][4296] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.223 [INFO][4296] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.237 [INFO][4296] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.256 [INFO][4296] ipam/ipam.go 1203: Writing block in 
order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.310 [INFO][4296] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.4/26] block=192.168.80.0/26 handle="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.311 [INFO][4296] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.4/26] handle="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.311 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:21.384503 containerd[1455]: 2025-01-30 13:54:21.311 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.4/26] IPv6=[] ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" HandleID="k8s-pod-network.4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.318 [INFO][4278] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf0d867-27ff-456c-8a63-367b3c10edb1", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-759d756c8b-9bswl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71d3ddf9f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.318 [INFO][4278] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.4/32] 
ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.318 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71d3ddf9f3e ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.328 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.330 [INFO][4278] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf0d867-27ff-456c-8a63-367b3c10edb1", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c", Pod:"calico-apiserver-759d756c8b-9bswl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71d3ddf9f3e", MAC:"a2:e1:32:69:49:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:21.385780 containerd[1455]: 2025-01-30 13:54:21.376 [INFO][4278] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c" Namespace="calico-apiserver" Pod="calico-apiserver-759d756c8b-9bswl" 
WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:21.464761 containerd[1455]: time="2025-01-30T13:54:21.461955033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:21.464761 containerd[1455]: time="2025-01-30T13:54:21.464364887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:21.464761 containerd[1455]: time="2025-01-30T13:54:21.464388596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:21.474362 containerd[1455]: time="2025-01-30T13:54:21.469246928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:21.528373 systemd[1]: Started cri-containerd-4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c.scope - libcontainer container 4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c. Jan 30 13:54:21.769191 systemd-networkd[1367]: calif17d5893b21: Gained IPv6LL Jan 30 13:54:21.820407 containerd[1455]: time="2025-01-30T13:54:21.819539354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759d756c8b-9bswl,Uid:abf0d867-27ff-456c-8a63-367b3c10edb1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c\"" Jan 30 13:54:21.832633 systemd-networkd[1367]: caliedd4df1a695: Gained IPv6LL Jan 30 13:54:22.060535 kernel: bpftool[4393]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:54:22.626805 systemd-networkd[1367]: vxlan.calico: Link UP Jan 30 13:54:22.626821 systemd-networkd[1367]: vxlan.calico: Gained carrier Jan 30 13:54:22.934829 containerd[1455]: time="2025-01-30T13:54:22.933686774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.936638 containerd[1455]: time="2025-01-30T13:54:22.936572435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:54:22.938316 containerd[1455]: time="2025-01-30T13:54:22.938249227Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.944176 containerd[1455]: time="2025-01-30T13:54:22.943467780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.945647 containerd[1455]: time="2025-01-30T13:54:22.945563696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.027064593s" Jan 30 13:54:22.945825 containerd[1455]: time="2025-01-30T13:54:22.945797300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:54:22.948740 containerd[1455]: time="2025-01-30T13:54:22.948241943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:54:22.950853 containerd[1455]: time="2025-01-30T13:54:22.950809825Z" level=info msg="CreateContainer within sandbox \"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:54:22.977225 containerd[1455]: time="2025-01-30T13:54:22.977159074Z" level=info msg="CreateContainer within sandbox \"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e9dc656a708bad5a420155217c494c6b767b203f50f092cc65b462e7a2980a66\"" Jan 30 13:54:22.978628 containerd[1455]: time="2025-01-30T13:54:22.978291384Z" level=info msg="StartContainer for \"e9dc656a708bad5a420155217c494c6b767b203f50f092cc65b462e7a2980a66\"" Jan 30 13:54:23.050373 systemd[1]: Started cri-containerd-e9dc656a708bad5a420155217c494c6b767b203f50f092cc65b462e7a2980a66.scope - libcontainer container e9dc656a708bad5a420155217c494c6b767b203f50f092cc65b462e7a2980a66. Jan 30 13:54:23.050438 systemd-networkd[1367]: cali71d3ddf9f3e: Gained IPv6LL Jan 30 13:54:23.135386 containerd[1455]: time="2025-01-30T13:54:23.134963146Z" level=info msg="StartContainer for \"e9dc656a708bad5a420155217c494c6b767b203f50f092cc65b462e7a2980a66\" returns successfully" Jan 30 13:54:23.431934 containerd[1455]: time="2025-01-30T13:54:23.431397682Z" level=info msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" Jan 30 13:54:23.431934 containerd[1455]: time="2025-01-30T13:54:23.431577369Z" level=info msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.591 [INFO][4545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.591 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" iface="eth0" netns="/var/run/netns/cni-f031d398-3c65-c2ab-e9a9-76f245b6773c" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.592 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" iface="eth0" netns="/var/run/netns/cni-f031d398-3c65-c2ab-e9a9-76f245b6773c" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.592 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" iface="eth0" netns="/var/run/netns/cni-f031d398-3c65-c2ab-e9a9-76f245b6773c" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.592 [INFO][4545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.592 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.678 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.678 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.678 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.697 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.700 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.708 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:23.721656 containerd[1455]: 2025-01-30 13:54:23.713 [INFO][4545] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:23.727377 containerd[1455]: time="2025-01-30T13:54:23.726292847Z" level=info msg="TearDown network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" successfully" Jan 30 13:54:23.727377 containerd[1455]: time="2025-01-30T13:54:23.727213691Z" level=info msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" returns successfully" Jan 30 13:54:23.730171 containerd[1455]: time="2025-01-30T13:54:23.729364169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hs4d,Uid:4ce51f87-697e-49a4-af41-1b0a623704f3,Namespace:calico-system,Attempt:1,}" Jan 30 13:54:23.732600 systemd[1]: run-netns-cni\x2df031d398\x2d3c65\x2dc2ab\x2de9a9\x2d76f245b6773c.mount: Deactivated successfully. 
Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.611 [INFO][4552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.612 [INFO][4552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" iface="eth0" netns="/var/run/netns/cni-5579a228-622d-f919-2922-8baa73eefef1" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.612 [INFO][4552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" iface="eth0" netns="/var/run/netns/cni-5579a228-622d-f919-2922-8baa73eefef1" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.614 [INFO][4552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" iface="eth0" netns="/var/run/netns/cni-5579a228-622d-f919-2922-8baa73eefef1" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.614 [INFO][4552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.614 [INFO][4552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.711 [INFO][4564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.712 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.713 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.745 [WARNING][4564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.745 [INFO][4564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.749 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:23.754687 containerd[1455]: 2025-01-30 13:54:23.752 [INFO][4552] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:23.756019 containerd[1455]: time="2025-01-30T13:54:23.755229281Z" level=info msg="TearDown network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" successfully" Jan 30 13:54:23.756019 containerd[1455]: time="2025-01-30T13:54:23.755292007Z" level=info msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" returns successfully" Jan 30 13:54:23.759857 containerd[1455]: time="2025-01-30T13:54:23.759583841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7465,Uid:9f3c7838-002a-42e5-a748-e9ea78b103bd,Namespace:kube-system,Attempt:1,}" Jan 30 13:54:23.768729 systemd[1]: run-netns-cni\x2d5579a228\x2d622d\x2df919\x2d2922\x2d8baa73eefef1.mount: Deactivated successfully. Jan 30 13:54:24.200287 systemd-networkd[1367]: cali15404cc956a: Link UP Jan 30 13:54:24.204388 systemd-networkd[1367]: cali15404cc956a: Gained carrier Jan 30 13:54:24.237184 kubelet[2606]: I0130 13:54:24.235704 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-759d756c8b-wvvmt" podStartSLOduration=25.205972207 podStartE2EDuration="29.235674406s" podCreationTimestamp="2025-01-30 13:53:55 +0000 UTC" firstStartedPulling="2025-01-30 13:54:18.917469158 +0000 UTC m=+45.660968268" lastFinishedPulling="2025-01-30 13:54:22.947171352 +0000 UTC m=+49.690670467" observedRunningTime="2025-01-30 13:54:23.875801528 +0000 UTC m=+50.619300654" watchObservedRunningTime="2025-01-30 13:54:24.235674406 +0000 UTC m=+50.979173531" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:23.948 [INFO][4583] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0 coredns-7db6d8ff4d- kube-system 9f3c7838-002a-42e5-a748-e9ea78b103bd 846 0 2025-01-30 13:53:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal coredns-7db6d8ff4d-v7465 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali15404cc956a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:23.948 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.099 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" HandleID="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.243019 containerd[1455]: 
2025-01-30 13:54:24.119 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" HandleID="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003954a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-v7465", "timestamp":"2025-01-30 13:54:24.099799082 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.119 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.119 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.119 [INFO][4601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.122 [INFO][4601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.129 [INFO][4601] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.146 [INFO][4601] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.149 [INFO][4601] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.154 [INFO][4601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.154 [INFO][4601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.157 [INFO][4601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305 Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.170 [INFO][4601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.184 [INFO][4601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.5/26] block=192.168.80.0/26 
handle="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.184 [INFO][4601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.5/26] handle="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.184 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:24.243019 containerd[1455]: 2025-01-30 13:54:24.184 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.5/26] IPv6=[] ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" HandleID="k8s-pod-network.e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.191 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f3c7838-002a-42e5-a748-e9ea78b103bd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-v7465", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15404cc956a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.192 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.5/32] ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" 
WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.192 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15404cc956a ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.204 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.207 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f3c7838-002a-42e5-a748-e9ea78b103bd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305", Pod:"coredns-7db6d8ff4d-v7465", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15404cc956a", MAC:"7a:36:45:c4:72:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:24.245654 containerd[1455]: 2025-01-30 13:54:24.232 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v7465" 
WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:24.303463 systemd-networkd[1367]: cali70bfa99dcaf: Link UP Jan 30 13:54:24.304558 systemd-networkd[1367]: cali70bfa99dcaf: Gained carrier Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:23.984 [INFO][4574] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0 csi-node-driver- calico-system 4ce51f87-697e-49a4-af41-1b0a623704f3 845 0 2025-01-30 13:53:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal csi-node-driver-9hs4d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali70bfa99dcaf [] []}} ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:23.985 [INFO][4574] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.130 [INFO][4605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" HandleID="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.150 [INFO][4605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" HandleID="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", "pod":"csi-node-driver-9hs4d", "timestamp":"2025-01-30 13:54:24.13087747 +0000 UTC"}, Hostname:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.150 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.185 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.185 [INFO][4605] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal' Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.189 [INFO][4605] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.202 [INFO][4605] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.222 [INFO][4605] ipam/ipam.go 489: Trying affinity for 192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.231 [INFO][4605] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.241 [INFO][4605] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.242 [INFO][4605] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.250 [INFO][4605] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660 Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.264 [INFO][4605] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.280 [INFO][4605] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.6/26] block=192.168.80.0/26 handle="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.280 [INFO][4605] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.6/26] handle="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" host="ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal" Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.281 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:54:24.360115 containerd[1455]: 2025-01-30 13:54:24.281 [INFO][4605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.6/26] IPv6=[] ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" HandleID="k8s-pod-network.b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.287 [INFO][4574] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4ce51f87-697e-49a4-af41-1b0a623704f3", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-9hs4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70bfa99dcaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.287 [INFO][4574] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.6/32] ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.287 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70bfa99dcaf ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.302 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" 
WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.307 [INFO][4574] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4ce51f87-697e-49a4-af41-1b0a623704f3", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660", Pod:"csi-node-driver-9hs4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70bfa99dcaf", MAC:"66:59:a0:2e:70:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:24.363550 containerd[1455]: 2025-01-30 13:54:24.345 [INFO][4574] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660" Namespace="calico-system" Pod="csi-node-driver-9hs4d" WorkloadEndpoint="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:24.378329 containerd[1455]: time="2025-01-30T13:54:24.375667121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:24.379514 containerd[1455]: time="2025-01-30T13:54:24.379123701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:24.379514 containerd[1455]: time="2025-01-30T13:54:24.379179778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.393162 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Jan 30 13:54:24.395320 containerd[1455]: time="2025-01-30T13:54:24.381151357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.496046 systemd[1]: Started cri-containerd-e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305.scope - libcontainer container e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305. Jan 30 13:54:24.519483 containerd[1455]: time="2025-01-30T13:54:24.519153300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:24.519483 containerd[1455]: time="2025-01-30T13:54:24.519269592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:24.519483 containerd[1455]: time="2025-01-30T13:54:24.519295547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.519483 containerd[1455]: time="2025-01-30T13:54:24.519427148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.596815 systemd[1]: Started cri-containerd-b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660.scope - libcontainer container b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660. Jan 30 13:54:24.672711 containerd[1455]: time="2025-01-30T13:54:24.672627846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7465,Uid:9f3c7838-002a-42e5-a748-e9ea78b103bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305\"" Jan 30 13:54:24.686846 containerd[1455]: time="2025-01-30T13:54:24.686644449Z" level=info msg="CreateContainer within sandbox \"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:54:24.739821 containerd[1455]: time="2025-01-30T13:54:24.738832075Z" level=info msg="CreateContainer within sandbox \"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91f48bbbaed9060fe209016a22720a25f1c31767de04301a6f0fbea5d1345b7d\"" Jan 30 13:54:24.742974 containerd[1455]: time="2025-01-30T13:54:24.741229149Z" level=info msg="StartContainer for \"91f48bbbaed9060fe209016a22720a25f1c31767de04301a6f0fbea5d1345b7d\"" Jan 30 13:54:24.778916 containerd[1455]: time="2025-01-30T13:54:24.777592801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hs4d,Uid:4ce51f87-697e-49a4-af41-1b0a623704f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660\"" Jan 30 13:54:24.851673 systemd[1]: Started cri-containerd-91f48bbbaed9060fe209016a22720a25f1c31767de04301a6f0fbea5d1345b7d.scope - libcontainer container 91f48bbbaed9060fe209016a22720a25f1c31767de04301a6f0fbea5d1345b7d. 
Jan 30 13:54:24.948804 containerd[1455]: time="2025-01-30T13:54:24.948564779Z" level=info msg="StartContainer for \"91f48bbbaed9060fe209016a22720a25f1c31767de04301a6f0fbea5d1345b7d\" returns successfully" Jan 30 13:54:25.864952 systemd-networkd[1367]: cali15404cc956a: Gained IPv6LL Jan 30 13:54:25.945113 kubelet[2606]: I0130 13:54:25.943044 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v7465" podStartSLOduration=37.943019151 podStartE2EDuration="37.943019151s" podCreationTimestamp="2025-01-30 13:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:25.918015179 +0000 UTC m=+52.661514344" watchObservedRunningTime="2025-01-30 13:54:25.943019151 +0000 UTC m=+52.686518279" Jan 30 13:54:26.249707 systemd-networkd[1367]: cali70bfa99dcaf: Gained IPv6LL Jan 30 13:54:26.271670 containerd[1455]: time="2025-01-30T13:54:26.271491030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:26.273883 containerd[1455]: time="2025-01-30T13:54:26.273618380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:54:26.275656 containerd[1455]: time="2025-01-30T13:54:26.275609223Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:26.281102 containerd[1455]: time="2025-01-30T13:54:26.279935204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:26.281769 containerd[1455]: time="2025-01-30T13:54:26.281712943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.33342821s" Jan 30 13:54:26.281876 containerd[1455]: time="2025-01-30T13:54:26.281774637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:54:26.284381 containerd[1455]: time="2025-01-30T13:54:26.284349831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:54:26.326402 containerd[1455]: time="2025-01-30T13:54:26.321941367Z" level=info msg="CreateContainer within sandbox \"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:54:26.346054 containerd[1455]: time="2025-01-30T13:54:26.345995378Z" level=info msg="CreateContainer within sandbox \"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4\"" Jan 30 13:54:26.347807 containerd[1455]: time="2025-01-30T13:54:26.347756426Z" level=info msg="StartContainer for 
\"3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4\"" Jan 30 13:54:26.418320 systemd[1]: Started cri-containerd-3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4.scope - libcontainer container 3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4. Jan 30 13:54:26.504271 containerd[1455]: time="2025-01-30T13:54:26.504123121Z" level=info msg="StartContainer for \"3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4\" returns successfully" Jan 30 13:54:26.528413 containerd[1455]: time="2025-01-30T13:54:26.528349727Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:26.534826 containerd[1455]: time="2025-01-30T13:54:26.534756317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:54:26.541512 containerd[1455]: time="2025-01-30T13:54:26.541072008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 256.522153ms" Jan 30 13:54:26.541512 containerd[1455]: time="2025-01-30T13:54:26.541214655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:54:26.544579 containerd[1455]: time="2025-01-30T13:54:26.544532823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:54:26.546102 containerd[1455]: time="2025-01-30T13:54:26.546045016Z" level=info msg="CreateContainer within sandbox \"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:54:26.564496 containerd[1455]: time="2025-01-30T13:54:26.564435632Z" level=info msg="CreateContainer within sandbox \"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"77399b6e51c97e5f8228548d9692983e45f0c382d048a26ee3bc7dd353277416\"" Jan 30 13:54:26.567147 containerd[1455]: time="2025-01-30T13:54:26.566035545Z" level=info msg="StartContainer for \"77399b6e51c97e5f8228548d9692983e45f0c382d048a26ee3bc7dd353277416\"" Jan 30 13:54:26.639135 systemd[1]: Started cri-containerd-77399b6e51c97e5f8228548d9692983e45f0c382d048a26ee3bc7dd353277416.scope - libcontainer container 77399b6e51c97e5f8228548d9692983e45f0c382d048a26ee3bc7dd353277416. 
Jan 30 13:54:26.743871 containerd[1455]: time="2025-01-30T13:54:26.743774647Z" level=info msg="StartContainer for \"77399b6e51c97e5f8228548d9692983e45f0c382d048a26ee3bc7dd353277416\" returns successfully" Jan 30 13:54:26.928254 kubelet[2606]: I0130 13:54:26.926995 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-759d756c8b-9bswl" podStartSLOduration=27.209628334 podStartE2EDuration="31.926861752s" podCreationTimestamp="2025-01-30 13:53:55 +0000 UTC" firstStartedPulling="2025-01-30 13:54:21.826152777 +0000 UTC m=+48.569651893" lastFinishedPulling="2025-01-30 13:54:26.543386193 +0000 UTC m=+53.286885311" observedRunningTime="2025-01-30 13:54:26.925835405 +0000 UTC m=+53.669334534" watchObservedRunningTime="2025-01-30 13:54:26.926861752 +0000 UTC m=+53.670360879" Jan 30 13:54:27.177659 kubelet[2606]: I0130 13:54:27.177547 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c84f96c9b-fzvvr" podStartSLOduration=25.400553222 podStartE2EDuration="31.177521522s" podCreationTimestamp="2025-01-30 13:53:56 +0000 UTC" firstStartedPulling="2025-01-30 13:54:20.506267693 +0000 UTC m=+47.249766801" lastFinishedPulling="2025-01-30 13:54:26.283235985 +0000 UTC m=+53.026735101" observedRunningTime="2025-01-30 13:54:26.952224871 +0000 UTC m=+53.695723999" watchObservedRunningTime="2025-01-30 13:54:27.177521522 +0000 UTC m=+53.921020652" Jan 30 13:54:27.307811 systemd[1]: run-containerd-runc-k8s.io-3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4-runc.wcZf9T.mount: Deactivated successfully. Jan 30 13:54:27.724654 containerd[1455]: time="2025-01-30T13:54:27.724506195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:27.727172 containerd[1455]: time="2025-01-30T13:54:27.727062530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:54:27.729616 containerd[1455]: time="2025-01-30T13:54:27.728740636Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:27.733310 containerd[1455]: time="2025-01-30T13:54:27.732930004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:27.735768 containerd[1455]: time="2025-01-30T13:54:27.735709108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.191117947s" Jan 30 13:54:27.735960 containerd[1455]: time="2025-01-30T13:54:27.735932830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:54:27.739913 containerd[1455]: time="2025-01-30T13:54:27.739875499Z" level=info msg="CreateContainer within sandbox \"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:54:27.775417 
containerd[1455]: time="2025-01-30T13:54:27.775364124Z" level=info msg="CreateContainer within sandbox \"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8793fad97b996a2a1942a428ef6bac39bd12165f8c975dbf945f0109fd38d7b4\"" Jan 30 13:54:27.778185 containerd[1455]: time="2025-01-30T13:54:27.777607965Z" level=info msg="StartContainer for \"8793fad97b996a2a1942a428ef6bac39bd12165f8c975dbf945f0109fd38d7b4\"" Jan 30 13:54:27.782353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041123187.mount: Deactivated successfully. Jan 30 13:54:27.858756 systemd[1]: Started cri-containerd-8793fad97b996a2a1942a428ef6bac39bd12165f8c975dbf945f0109fd38d7b4.scope - libcontainer container 8793fad97b996a2a1942a428ef6bac39bd12165f8c975dbf945f0109fd38d7b4. Jan 30 13:54:27.920132 containerd[1455]: time="2025-01-30T13:54:27.919565750Z" level=info msg="StartContainer for \"8793fad97b996a2a1942a428ef6bac39bd12165f8c975dbf945f0109fd38d7b4\" returns successfully" Jan 30 13:54:27.923740 containerd[1455]: time="2025-01-30T13:54:27.923293694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:54:29.200605 ntpd[1424]: Listen normally on 8 vxlan.calico 192.168.80.0:123 Jan 30 13:54:29.201518 ntpd[1424]: Listen normally on 9 calia67f28586be [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:54:29.203191 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 8 vxlan.calico 192.168.80.0:123 Jan 30 13:54:29.203191 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 9 calia67f28586be [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:54:29.203191 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 10 calif17d5893b21 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:54:29.202576 ntpd[1424]: Listen normally on 10 calif17d5893b21 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:54:29.202661 ntpd[1424]: Listen normally on 11 caliedd4df1a695 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:29.205238 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 11 caliedd4df1a695 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:29.205238 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 12 cali71d3ddf9f3e [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:29.205238 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 13 vxlan.calico [fe80::6422:b7ff:fed1:1de7%8]:123 Jan 30 13:54:29.205238 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 14 cali15404cc956a [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 13:54:29.205238 ntpd[1424]: 30 Jan 13:54:29 ntpd[1424]: Listen normally on 15 cali70bfa99dcaf [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 13:54:29.203868 ntpd[1424]: Listen normally on 12 cali71d3ddf9f3e [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:29.203966 ntpd[1424]: Listen normally on 13 vxlan.calico [fe80::6422:b7ff:fed1:1de7%8]:123 Jan 30 13:54:29.204055 ntpd[1424]: Listen normally on 14 cali15404cc956a [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 13:54:29.204211 ntpd[1424]: Listen normally on 15 cali70bfa99dcaf [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 13:54:29.351584 containerd[1455]: time="2025-01-30T13:54:29.351485684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.352899 containerd[1455]: time="2025-01-30T13:54:29.352837646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 
13:54:29.354204 containerd[1455]: time="2025-01-30T13:54:29.354110409Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.357235 containerd[1455]: time="2025-01-30T13:54:29.357151091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:29.358351 containerd[1455]: time="2025-01-30T13:54:29.358303669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.434922015s" Jan 30 13:54:29.358462 containerd[1455]: time="2025-01-30T13:54:29.358357517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:54:29.362312 containerd[1455]: time="2025-01-30T13:54:29.362268929Z" level=info msg="CreateContainer within sandbox \"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:54:29.383008 containerd[1455]: time="2025-01-30T13:54:29.382791006Z" level=info msg="CreateContainer within sandbox \"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"faa6a49239e0abcd5565258af136c355e101374553c4855402d54375ad13b9d8\"" Jan 30 13:54:29.384316 containerd[1455]: time="2025-01-30T13:54:29.384264348Z" level=info msg="StartContainer for \"faa6a49239e0abcd5565258af136c355e101374553c4855402d54375ad13b9d8\"" Jan 30 13:54:29.478332 systemd[1]: Started cri-containerd-faa6a49239e0abcd5565258af136c355e101374553c4855402d54375ad13b9d8.scope - libcontainer container faa6a49239e0abcd5565258af136c355e101374553c4855402d54375ad13b9d8. 
Jan 30 13:54:29.550419 containerd[1455]: time="2025-01-30T13:54:29.549816540Z" level=info msg="StartContainer for \"faa6a49239e0abcd5565258af136c355e101374553c4855402d54375ad13b9d8\" returns successfully" Jan 30 13:54:29.594183 kubelet[2606]: I0130 13:54:29.593469 2606 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:54:29.594183 kubelet[2606]: I0130 13:54:29.593523 2606 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:54:29.953970 kubelet[2606]: I0130 13:54:29.953560 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9hs4d" podStartSLOduration=29.378446996 podStartE2EDuration="33.953532602s" podCreationTimestamp="2025-01-30 13:53:56 +0000 UTC" firstStartedPulling="2025-01-30 13:54:24.784605902 +0000 UTC m=+51.528105017" lastFinishedPulling="2025-01-30 13:54:29.3596915 +0000 UTC m=+56.103190623" observedRunningTime="2025-01-30 13:54:29.952836096 +0000 UTC m=+56.696335221" watchObservedRunningTime="2025-01-30 13:54:29.953532602 +0000 UTC m=+56.697031730" Jan 30 13:54:33.441544 containerd[1455]: time="2025-01-30T13:54:33.441487521Z" level=info msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.496 [WARNING][4979] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"042a4ac1-a034-4cca-8cee-8de63b6b51bd", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669", Pod:"coredns-7db6d8ff4d-dtpgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17d5893b21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.496 [INFO][4979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.496 [INFO][4979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" iface="eth0" netns="" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.496 [INFO][4979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.496 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.525 [INFO][4987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.525 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.525 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.533 [WARNING][4987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.534 [INFO][4987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.536 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:33.539365 containerd[1455]: 2025-01-30 13:54:33.538 [INFO][4979] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.540482 containerd[1455]: time="2025-01-30T13:54:33.539376435Z" level=info msg="TearDown network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" successfully" Jan 30 13:54:33.540482 containerd[1455]: time="2025-01-30T13:54:33.539415874Z" level=info msg="StopPodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" returns successfully" Jan 30 13:54:33.540593 containerd[1455]: time="2025-01-30T13:54:33.540490626Z" level=info msg="RemovePodSandbox for \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" Jan 30 13:54:33.540593 containerd[1455]: time="2025-01-30T13:54:33.540530001Z" level=info msg="Forcibly stopping sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\"" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.604 [WARNING][5006] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"042a4ac1-a034-4cca-8cee-8de63b6b51bd", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"1ff38e3d016642c75e5530fd552637dedbfa59e4536c42509e533713010e3669", Pod:"coredns-7db6d8ff4d-dtpgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17d5893b21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.604 [INFO][5006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.604 [INFO][5006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" iface="eth0" netns="" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.604 [INFO][5006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.604 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.634 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.634 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.634 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.640 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.640 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" HandleID="k8s-pod-network.11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--dtpgm-eth0" Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.642 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:33.645019 containerd[1455]: 2025-01-30 13:54:33.643 [INFO][5006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba" Jan 30 13:54:33.645019 containerd[1455]: time="2025-01-30T13:54:33.644985514Z" level=info msg="TearDown network for sandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" successfully" Jan 30 13:54:33.650611 containerd[1455]: time="2025-01-30T13:54:33.650562548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:33.650774 containerd[1455]: time="2025-01-30T13:54:33.650666813Z" level=info msg="RemovePodSandbox \"11b429ab3982d93c6f166a7265664f9c8d4d603f59d66d3ae937f9eee01230ba\" returns successfully" Jan 30 13:54:33.651443 containerd[1455]: time="2025-01-30T13:54:33.651406969Z" level=info msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.696 [WARNING][5031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf0d867-27ff-456c-8a63-367b3c10edb1", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c", Pod:"calico-apiserver-759d756c8b-9bswl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71d3ddf9f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.696 [INFO][5031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.696 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" iface="eth0" netns="" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.696 [INFO][5031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.696 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.724 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.725 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.725 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.732 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.732 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.734 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:33.736788 containerd[1455]: 2025-01-30 13:54:33.735 [INFO][5031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.736788 containerd[1455]: time="2025-01-30T13:54:33.736632445Z" level=info msg="TearDown network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" successfully" Jan 30 13:54:33.736788 containerd[1455]: time="2025-01-30T13:54:33.736667458Z" level=info msg="StopPodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" returns successfully" Jan 30 13:54:33.739911 containerd[1455]: time="2025-01-30T13:54:33.737939281Z" level=info msg="RemovePodSandbox for \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" Jan 30 13:54:33.739911 containerd[1455]: time="2025-01-30T13:54:33.738117989Z" level=info msg="Forcibly stopping sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\"" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.783 [WARNING][5056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf0d867-27ff-456c-8a63-367b3c10edb1", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"4e4d6429dc3ec2672ea5f69b6d1005132932deca4d3f93aea1db026191db766c", Pod:"calico-apiserver-759d756c8b-9bswl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71d3ddf9f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.784 [INFO][5056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.784 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" iface="eth0" netns="" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.784 [INFO][5056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.784 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.813 [INFO][5062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.813 [INFO][5062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.813 [INFO][5062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.826 [WARNING][5062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.827 [INFO][5062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" HandleID="k8s-pod-network.7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--9bswl-eth0" Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.832 [INFO][5062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:33.837442 containerd[1455]: 2025-01-30 13:54:33.835 [INFO][5056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d" Jan 30 13:54:33.837442 containerd[1455]: time="2025-01-30T13:54:33.837539657Z" level=info msg="TearDown network for sandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" successfully" Jan 30 13:54:33.844128 containerd[1455]: time="2025-01-30T13:54:33.843250414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:33.844128 containerd[1455]: time="2025-01-30T13:54:33.843383816Z" level=info msg="RemovePodSandbox \"7384c721c6209e76312a9ed5ffb60f80c5ace228f9257f0549adc8849e17080d\" returns successfully" Jan 30 13:54:33.845197 containerd[1455]: time="2025-01-30T13:54:33.844850833Z" level=info msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.898 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4ce51f87-697e-49a4-af41-1b0a623704f3", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660", Pod:"csi-node-driver-9hs4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70bfa99dcaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.898 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.899 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" iface="eth0" netns="" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.899 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.899 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.926 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.927 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.927 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.936 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.936 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.939 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:33.941779 containerd[1455]: 2025-01-30 13:54:33.940 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:33.943021 containerd[1455]: time="2025-01-30T13:54:33.941830650Z" level=info msg="TearDown network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" successfully" Jan 30 13:54:33.943021 containerd[1455]: time="2025-01-30T13:54:33.941864245Z" level=info msg="StopPodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" returns successfully" Jan 30 13:54:33.943190 containerd[1455]: time="2025-01-30T13:54:33.943056354Z" level=info msg="RemovePodSandbox for \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" Jan 30 13:54:33.943190 containerd[1455]: time="2025-01-30T13:54:33.943109518Z" level=info msg="Forcibly stopping sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\"" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:33.996 [WARNING][5107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4ce51f87-697e-49a4-af41-1b0a623704f3", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"b8fd8c3d08bea44bb4a4c1ec333dbe9fed32b5231ce1f29074ed1476e84fa660", Pod:"csi-node-driver-9hs4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70bfa99dcaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:33.996 [INFO][5107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:33.997 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" iface="eth0" netns="" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:33.997 [INFO][5107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:33.997 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.022 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.022 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.022 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.031 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.031 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" HandleID="k8s-pod-network.48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-csi--node--driver--9hs4d-eth0" Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.033 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.035954 containerd[1455]: 2025-01-30 13:54:34.034 [INFO][5107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52" Jan 30 13:54:34.035954 containerd[1455]: time="2025-01-30T13:54:34.035921693Z" level=info msg="TearDown network for sandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" successfully" Jan 30 13:54:34.041523 containerd[1455]: time="2025-01-30T13:54:34.041473182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:34.041773 containerd[1455]: time="2025-01-30T13:54:34.041560336Z" level=info msg="RemovePodSandbox \"48ac7abad5771e7616a46bb86abb0af9521cb51df9be897e2ac44bf07bbf2b52\" returns successfully" Jan 30 13:54:34.042294 containerd[1455]: time="2025-01-30T13:54:34.042261149Z" level=info msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.090 [WARNING][5131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce", Pod:"calico-apiserver-759d756c8b-wvvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia67f28586be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.091 [INFO][5131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.091 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" iface="eth0" netns="" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.091 [INFO][5131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.091 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.120 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.120 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.121 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.128 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.128 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.129 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.132502 containerd[1455]: 2025-01-30 13:54:34.131 [INFO][5131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.133628 containerd[1455]: time="2025-01-30T13:54:34.132545941Z" level=info msg="TearDown network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" successfully" Jan 30 13:54:34.133628 containerd[1455]: time="2025-01-30T13:54:34.132603772Z" level=info msg="StopPodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" returns successfully" Jan 30 13:54:34.133628 containerd[1455]: time="2025-01-30T13:54:34.133329133Z" level=info msg="RemovePodSandbox for \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" Jan 30 13:54:34.133628 containerd[1455]: time="2025-01-30T13:54:34.133374204Z" level=info msg="Forcibly stopping sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\"" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.178 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0", GenerateName:"calico-apiserver-759d756c8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b9b7590-9585-4f18-ab1c-6fd1a8042bb6", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759d756c8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"7099044d60e9b7f4123feb95a931301be1fdf71bc24bb2f29f99b51ba5212bce", Pod:"calico-apiserver-759d756c8b-wvvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia67f28586be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.178 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.178 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" iface="eth0" netns="" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.178 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.178 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.208 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.208 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.208 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.218 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.218 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" HandleID="k8s-pod-network.0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--apiserver--759d756c8b--wvvmt-eth0" Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.220 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.223142 containerd[1455]: 2025-01-30 13:54:34.221 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2" Jan 30 13:54:34.224016 containerd[1455]: time="2025-01-30T13:54:34.223206872Z" level=info msg="TearDown network for sandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" successfully" Jan 30 13:54:34.228484 containerd[1455]: time="2025-01-30T13:54:34.228426607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:34.228836 containerd[1455]: time="2025-01-30T13:54:34.228543980Z" level=info msg="RemovePodSandbox \"0fe0fa8ebb5312494b37f2343a839baf0a3f3dee1a7c78bc523507cc19f56bd2\" returns successfully" Jan 30 13:54:34.229281 containerd[1455]: time="2025-01-30T13:54:34.229225248Z" level=info msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.274 [WARNING][5179] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0", GenerateName:"calico-kube-controllers-7c84f96c9b-", Namespace:"calico-system", SelfLink:"", UID:"1df2e48e-0b26-4bea-a871-83ff0735e248", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84f96c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533", Pod:"calico-kube-controllers-7c84f96c9b-fzvvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedd4df1a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.275 [INFO][5179] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.275 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" iface="eth0" netns="" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.275 [INFO][5179] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.275 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.301 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.301 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.301 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.310 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.310 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.313 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.315525 containerd[1455]: 2025-01-30 13:54:34.314 [INFO][5179] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.316796 containerd[1455]: time="2025-01-30T13:54:34.315583290Z" level=info msg="TearDown network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" successfully" Jan 30 13:54:34.316796 containerd[1455]: time="2025-01-30T13:54:34.315617839Z" level=info msg="StopPodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" returns successfully" Jan 30 13:54:34.316796 containerd[1455]: time="2025-01-30T13:54:34.316288447Z" level=info msg="RemovePodSandbox for \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" Jan 30 13:54:34.316796 containerd[1455]: time="2025-01-30T13:54:34.316365585Z" level=info msg="Forcibly stopping sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\"" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.367 [WARNING][5203] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0", GenerateName:"calico-kube-controllers-7c84f96c9b-", Namespace:"calico-system", SelfLink:"", UID:"1df2e48e-0b26-4bea-a871-83ff0735e248", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84f96c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"0677e4e82422433ca59517ba8dbb3b46a64622110779c76b6266d69ad03c3533", Pod:"calico-kube-controllers-7c84f96c9b-fzvvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedd4df1a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.368 [INFO][5203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.368 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" iface="eth0" netns="" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.368 [INFO][5203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.368 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.394 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.394 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.394 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.404 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.405 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" HandleID="k8s-pod-network.041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-calico--kube--controllers--7c84f96c9b--fzvvr-eth0" Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.406 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.410212 containerd[1455]: 2025-01-30 13:54:34.408 [INFO][5203] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169" Jan 30 13:54:34.411680 containerd[1455]: time="2025-01-30T13:54:34.410274830Z" level=info msg="TearDown network for sandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" successfully" Jan 30 13:54:34.415571 containerd[1455]: time="2025-01-30T13:54:34.415452829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:34.415571 containerd[1455]: time="2025-01-30T13:54:34.415552861Z" level=info msg="RemovePodSandbox \"041c6eca519b5697dcb5fa9d112e5ce67be26c99aeab7440248d1ed45c91e169\" returns successfully" Jan 30 13:54:34.416390 containerd[1455]: time="2025-01-30T13:54:34.416352993Z" level=info msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.464 [WARNING][5227] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f3c7838-002a-42e5-a748-e9ea78b103bd", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305", Pod:"coredns-7db6d8ff4d-v7465", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15404cc956a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.464 [INFO][5227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.464 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" iface="eth0" netns="" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.464 [INFO][5227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.464 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.500 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.500 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.501 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.511 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.511 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.513 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.515824 containerd[1455]: 2025-01-30 13:54:34.514 [INFO][5227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.515824 containerd[1455]: time="2025-01-30T13:54:34.515816748Z" level=info msg="TearDown network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" successfully" Jan 30 13:54:34.517177 containerd[1455]: time="2025-01-30T13:54:34.515852604Z" level=info msg="StopPodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" returns successfully" Jan 30 13:54:34.517177 containerd[1455]: time="2025-01-30T13:54:34.516506323Z" level=info msg="RemovePodSandbox for \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" Jan 30 13:54:34.517177 containerd[1455]: time="2025-01-30T13:54:34.516548695Z" level=info msg="Forcibly stopping sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\"" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.564 [WARNING][5253] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f3c7838-002a-42e5-a748-e9ea78b103bd", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-cd2637fafdc80107349d.c.flatcar-212911.internal", ContainerID:"e6dc328da309a79eced9b5e27528c24865a40ccb1653ff1f4ec5f8aaf9ef6305", Pod:"coredns-7db6d8ff4d-v7465", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali15404cc956a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.603 [INFO][5253] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.603 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" iface="eth0" netns="" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.603 [INFO][5253] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.603 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.633 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.633 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.633 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.641 [WARNING][5260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.641 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" HandleID="k8s-pod-network.53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Workload="ci--4081--3--0--cd2637fafdc80107349d.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--v7465-eth0" Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.643 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:34.647190 containerd[1455]: 2025-01-30 13:54:34.644 [INFO][5253] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245" Jan 30 13:54:34.647190 containerd[1455]: time="2025-01-30T13:54:34.646295641Z" level=info msg="TearDown network for sandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" successfully" Jan 30 13:54:34.940039 containerd[1455]: time="2025-01-30T13:54:34.938284992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:34.940039 containerd[1455]: time="2025-01-30T13:54:34.938391541Z" level=info msg="RemovePodSandbox \"53d4a9bb4266efc88ea95fa6dbb909c3f76fc6ac491b4843a7cb06a1fe1c3245\" returns successfully" Jan 30 13:54:40.110496 systemd[1]: Started sshd@7-10.128.0.23:22-139.178.68.195:38328.service - OpenSSH per-connection server daemon (139.178.68.195:38328). Jan 30 13:54:40.392940 sshd[5289]: Accepted publickey for core from 139.178.68.195 port 38328 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:40.394930 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:40.403819 systemd-logind[1445]: New session 8 of user core. Jan 30 13:54:40.407762 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:54:40.690247 sshd[5289]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:40.696716 systemd[1]: sshd@7-10.128.0.23:22-139.178.68.195:38328.service: Deactivated successfully. Jan 30 13:54:40.699764 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:54:40.700969 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:54:40.702734 systemd-logind[1445]: Removed session 8. Jan 30 13:54:45.742507 systemd[1]: Started sshd@8-10.128.0.23:22-139.178.68.195:47456.service - OpenSSH per-connection server daemon (139.178.68.195:47456). 
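[editor's note] Every sandbox removal above follows the same two-pass shape: a StopPodSandbox that tears down networking, then a "Forcibly stopping sandbox" pass during which the "Failed to get podSandbox status ... not found" warning is expected — the sandbox metadata is already gone, and each removal still "returns successfully". In both passes the CNI plugin declines to delete the WorkloadEndpoint because CNI_CONTAINERID (the sandbox being removed, e.g. 7384c721…) no longer matches the ContainerID recorded on the endpoint (e.g. 4e4d6429…, the sandbox that currently networks the pod). A sketch of that ownership guard, with illustrative types rather than Calico's internal API:

package main

import "log"

// workloadEndpoint is an illustrative stand-in for the v3.WorkloadEndpoint
// dumped in the log; only the field the guard needs is modeled.
type workloadEndpoint struct {
	ContainerID string // sandbox that most recently networked this pod
}

// maybeDeleteWEP models the check behind the repeated WARNING above:
// only the sandbox that currently owns the endpoint may delete it, so a
// DEL replayed for an old sandbox ID cleans up its netns and IP
// allocation but leaves the endpoint in place for the live sandbox.
func maybeDeleteWEP(cniContainerID string, wep *workloadEndpoint) bool {
	if wep.ContainerID != cniContainerID {
		log.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
		return false
	}
	// ... delete the WorkloadEndpoint here ...
	return true
}

func main() {
	wep := &workloadEndpoint{ContainerID: "current-sandbox-id"} // hypothetical IDs
	maybeDeleteWEP("stale-sandbox-id", wep)                     // logs the warning, keeps the WEP
}
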
Jan 30 13:54:46.028032 sshd[5311]: Accepted publickey for core from 139.178.68.195 port 47456 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:46.029935 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:46.036907 systemd-logind[1445]: New session 9 of user core. Jan 30 13:54:46.042455 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:54:46.313608 sshd[5311]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:46.319480 systemd[1]: sshd@8-10.128.0.23:22-139.178.68.195:47456.service: Deactivated successfully. Jan 30 13:54:46.322272 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:54:46.323637 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:54:46.325280 systemd-logind[1445]: Removed session 9. Jan 30 13:54:51.368497 systemd[1]: Started sshd@9-10.128.0.23:22-139.178.68.195:47460.service - OpenSSH per-connection server daemon (139.178.68.195:47460). Jan 30 13:54:51.655183 sshd[5349]: Accepted publickey for core from 139.178.68.195 port 47460 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:51.656885 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:51.663375 systemd-logind[1445]: New session 10 of user core. Jan 30 13:54:51.669385 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:54:51.940365 sshd[5349]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:51.945291 systemd[1]: sshd@9-10.128.0.23:22-139.178.68.195:47460.service: Deactivated successfully. Jan 30 13:54:51.948763 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:54:51.950974 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:54:51.952625 systemd-logind[1445]: Removed session 10. Jan 30 13:54:52.000537 systemd[1]: Started sshd@10-10.128.0.23:22-139.178.68.195:47464.service - OpenSSH per-connection server daemon (139.178.68.195:47464). Jan 30 13:54:52.281789 sshd[5363]: Accepted publickey for core from 139.178.68.195 port 47464 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:52.283788 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.290410 systemd-logind[1445]: New session 11 of user core. Jan 30 13:54:52.295322 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:54:52.612655 sshd[5363]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.619428 systemd[1]: sshd@10-10.128.0.23:22-139.178.68.195:47464.service: Deactivated successfully. Jan 30 13:54:52.622363 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:54:52.624681 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:54:52.626241 systemd-logind[1445]: Removed session 11. Jan 30 13:54:52.670514 systemd[1]: Started sshd@11-10.128.0.23:22-139.178.68.195:47476.service - OpenSSH per-connection server daemon (139.178.68.195:47476). Jan 30 13:54:52.957811 sshd[5374]: Accepted publickey for core from 139.178.68.195 port 47476 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:52.960250 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.966960 systemd-logind[1445]: New session 12 of user core. Jan 30 13:54:52.972333 systemd[1]: Started session-12.scope - Session 12 of User core. 
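[editor's note] From here on the log settles into the steady rhythm of the CI host's SSH sessions. Each connection gets its own socket-activated unit whose instance name encodes a sequence number plus the local and remote endpoints (sshd@7-10.128.0.23:22-139.178.68.195:38328.service, and so on). A small Go helper that splits such a unit name apart — the field layout is inferred purely from the names in this log, and IPv4 is assumed since IPv6 addresses would need different handling:

package main

import (
	"fmt"
	"strings"
)

// parseSSHDUnit splits a per-connection sshd unit name such as
// "sshd@7-10.128.0.23:22-139.178.68.195:38328.service" into its
// sequence number, local endpoint, and remote endpoint.
func parseSSHDUnit(unit string) (seq, local, remote string, err error) {
	name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(name, "-", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected sshd unit name %q", unit)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	seq, local, remote, err := parseSSHDUnit("sshd@7-10.128.0.23:22-139.178.68.195:38328.service")
	if err != nil {
		panic(err)
	}
	fmt.Println(seq, local, remote) // 7 10.128.0.23:22 139.178.68.195:38328
}
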
Jan 30 13:54:53.244834 sshd[5374]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:53.249724 systemd[1]: sshd@11-10.128.0.23:22-139.178.68.195:47476.service: Deactivated successfully. Jan 30 13:54:53.252805 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:54:53.255596 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:54:53.257224 systemd-logind[1445]: Removed session 12. Jan 30 13:54:58.298557 systemd[1]: Started sshd@12-10.128.0.23:22-139.178.68.195:45044.service - OpenSSH per-connection server daemon (139.178.68.195:45044). Jan 30 13:54:58.584200 sshd[5392]: Accepted publickey for core from 139.178.68.195 port 45044 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:54:58.587280 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:58.593250 systemd-logind[1445]: New session 13 of user core. Jan 30 13:54:58.599367 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:54:58.874818 sshd[5392]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:58.880842 systemd[1]: sshd@12-10.128.0.23:22-139.178.68.195:45044.service: Deactivated successfully. Jan 30 13:54:58.883672 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:54:58.884694 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:54:58.886460 systemd-logind[1445]: Removed session 13. Jan 30 13:55:03.931505 systemd[1]: Started sshd@13-10.128.0.23:22-139.178.68.195:45052.service - OpenSSH per-connection server daemon (139.178.68.195:45052). Jan 30 13:55:04.231573 sshd[5411]: Accepted publickey for core from 139.178.68.195 port 45052 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:55:04.233849 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:04.240115 systemd-logind[1445]: New session 14 of user core. Jan 30 13:55:04.244359 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:55:04.520266 sshd[5411]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:04.524978 systemd[1]: sshd@13-10.128.0.23:22-139.178.68.195:45052.service: Deactivated successfully. Jan 30 13:55:04.527982 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:55:04.530223 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:55:04.531965 systemd-logind[1445]: Removed session 14. Jan 30 13:55:06.110520 systemd[1]: run-containerd-runc-k8s.io-3939d29976779cb5947af6c83c1e9ecf58d62c3c54b6aecab25b4eb4380889e4-runc.oIydlM.mount: Deactivated successfully. Jan 30 13:55:09.576500 systemd[1]: Started sshd@14-10.128.0.23:22-139.178.68.195:36388.service - OpenSSH per-connection server daemon (139.178.68.195:36388). Jan 30 13:55:09.858484 sshd[5444]: Accepted publickey for core from 139.178.68.195 port 36388 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:55:09.860534 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:09.867228 systemd-logind[1445]: New session 15 of user core. Jan 30 13:55:09.872346 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:55:10.148806 sshd[5444]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:10.153866 systemd[1]: sshd@14-10.128.0.23:22-139.178.68.195:36388.service: Deactivated successfully. Jan 30 13:55:10.157203 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 30 13:55:10.159367 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:55:10.160856 systemd-logind[1445]: Removed session 15.
Jan 30 13:55:15.203499 systemd[1]: Started sshd@15-10.128.0.23:22-139.178.68.195:46656.service - OpenSSH per-connection server daemon (139.178.68.195:46656).
Jan 30 13:55:15.489225 sshd[5476]: Accepted publickey for core from 139.178.68.195 port 46656 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:15.491221 sshd[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:15.498254 systemd-logind[1445]: New session 16 of user core.
Jan 30 13:55:15.504326 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:55:15.773485 sshd[5476]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:15.778511 systemd[1]: sshd@15-10.128.0.23:22-139.178.68.195:46656.service: Deactivated successfully.
Jan 30 13:55:15.781445 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:55:15.783759 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:55:15.785476 systemd-logind[1445]: Removed session 16.
Jan 30 13:55:15.827495 systemd[1]: Started sshd@16-10.128.0.23:22-139.178.68.195:46672.service - OpenSSH per-connection server daemon (139.178.68.195:46672).
Jan 30 13:55:16.105192 sshd[5489]: Accepted publickey for core from 139.178.68.195 port 46672 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:16.107168 sshd[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:16.113104 systemd-logind[1445]: New session 17 of user core.
Jan 30 13:55:16.122341 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:55:16.475969 sshd[5489]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:16.482140 systemd[1]: sshd@16-10.128.0.23:22-139.178.68.195:46672.service: Deactivated successfully.
Jan 30 13:55:16.485216 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:55:16.486460 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:55:16.487978 systemd-logind[1445]: Removed session 17.
Jan 30 13:55:16.530560 systemd[1]: Started sshd@17-10.128.0.23:22-139.178.68.195:46688.service - OpenSSH per-connection server daemon (139.178.68.195:46688).
Jan 30 13:55:16.813964 sshd[5500]: Accepted publickey for core from 139.178.68.195 port 46688 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:16.815878 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:16.821505 systemd-logind[1445]: New session 18 of user core.
Jan 30 13:55:16.830324 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:55:19.095671 sshd[5500]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:19.103419 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:55:19.104555 systemd[1]: sshd@17-10.128.0.23:22-139.178.68.195:46688.service: Deactivated successfully.
Jan 30 13:55:19.111360 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:55:19.116983 systemd-logind[1445]: Removed session 18.
Jan 30 13:55:19.152441 systemd[1]: Started sshd@18-10.128.0.23:22-139.178.68.195:46692.service - OpenSSH per-connection server daemon (139.178.68.195:46692).
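The unit names of the form sshd@18-10.128.0.23:22-139.178.68.195:46692.service appear to come from systemd socket activation with Accept=yes: one instanced sshd@.service is spawned per inbound connection (hence "OpenSSH per-connection server daemon"), and the instance string encodes a connection counter plus the local and remote address:port pair. A small decoder sketch (illustrative; assumes IPv4 endpoints, as in this log):

def decode_sshd_unit(unit: str) -> dict:
    """Split e.g. 'sshd@18-10.128.0.23:22-139.178.68.195:46692.service'
    into its counter and endpoint parts (IPv4 only; IPv6 would need care)."""
    instance = unit.removeprefix("sshd@").removesuffix(".service")
    counter, local, remote = instance.split("-", 2)
    local_ip, local_port = local.rsplit(":", 1)
    remote_ip, remote_port = remote.rsplit(":", 1)
    return {
        "counter": int(counter),  # per-socket connection counter
        "local": (local_ip, int(local_port)),
        "remote": (remote_ip, int(remote_port)),
    }

print(decode_sshd_unit("sshd@18-10.128.0.23:22-139.178.68.195:46692.service"))
# {'counter': 18, 'local': ('10.128.0.23', 22), 'remote': ('139.178.68.195', 46692)}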
Jan 30 13:55:19.442305 sshd[5539]: Accepted publickey for core from 139.178.68.195 port 46692 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:19.443158 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:19.450488 systemd-logind[1445]: New session 19 of user core.
Jan 30 13:55:19.457324 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:55:19.861967 sshd[5539]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:19.867820 systemd[1]: sshd@18-10.128.0.23:22-139.178.68.195:46692.service: Deactivated successfully.
Jan 30 13:55:19.870961 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:55:19.872240 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:55:19.874000 systemd-logind[1445]: Removed session 19.
Jan 30 13:55:19.919493 systemd[1]: Started sshd@19-10.128.0.23:22-139.178.68.195:46700.service - OpenSSH per-connection server daemon (139.178.68.195:46700).
Jan 30 13:55:20.204246 sshd[5552]: Accepted publickey for core from 139.178.68.195 port 46700 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:20.206144 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:20.212852 systemd-logind[1445]: New session 20 of user core.
Jan 30 13:55:20.218296 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:55:20.493608 sshd[5552]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:20.498593 systemd[1]: sshd@19-10.128.0.23:22-139.178.68.195:46700.service: Deactivated successfully.
Jan 30 13:55:20.502013 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:55:20.504339 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:55:20.506068 systemd-logind[1445]: Removed session 20.
Jan 30 13:55:25.548840 systemd[1]: Started sshd@20-10.128.0.23:22-139.178.68.195:35242.service - OpenSSH per-connection server daemon (139.178.68.195:35242).
Jan 30 13:55:25.833776 sshd[5568]: Accepted publickey for core from 139.178.68.195 port 35242 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:25.835746 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:25.843181 systemd-logind[1445]: New session 21 of user core.
Jan 30 13:55:25.845310 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:55:26.130258 sshd[5568]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:26.136008 systemd[1]: sshd@20-10.128.0.23:22-139.178.68.195:35242.service: Deactivated successfully.
Jan 30 13:55:26.139111 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:55:26.140237 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:55:26.141649 systemd-logind[1445]: Removed session 21.
Jan 30 13:55:31.185458 systemd[1]: Started sshd@21-10.128.0.23:22-139.178.68.195:35254.service - OpenSSH per-connection server daemon (139.178.68.195:35254).
Jan 30 13:55:31.464139 sshd[5583]: Accepted publickey for core from 139.178.68.195 port 35254 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:31.465927 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:31.472653 systemd-logind[1445]: New session 22 of user core.
Jan 30 13:55:31.479327 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:55:31.778346 sshd[5583]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:31.789388 systemd[1]: sshd@21-10.128.0.23:22-139.178.68.195:35254.service: Deactivated successfully.
Jan 30 13:55:31.794823 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:55:31.797182 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:55:31.799520 systemd-logind[1445]: Removed session 22.
Jan 30 13:55:36.832525 systemd[1]: Started sshd@22-10.128.0.23:22-139.178.68.195:56746.service - OpenSSH per-connection server daemon (139.178.68.195:56746).
Jan 30 13:55:37.128456 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 56746 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:55:37.130366 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:55:37.137422 systemd-logind[1445]: New session 23 of user core.
Jan 30 13:55:37.144340 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:55:37.447291 sshd[5617]: pam_unix(sshd:session): session closed for user core
Jan 30 13:55:37.453351 systemd[1]: sshd@22-10.128.0.23:22-139.178.68.195:56746.service: Deactivated successfully.
Jan 30 13:55:37.457017 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:55:37.458185 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:55:37.459774 systemd-logind[1445]: Removed session 23.
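Taken together, the paired "New session N" / "Removed session N" entries from systemd-logind are enough to reconstruct session lifetimes across the whole log. A sketch that pairs them and computes durations (illustrative; syslog-style stamps omit the year, which cancels out for same-day subtraction):

import re
from datetime import datetime

OPEN_RE = re.compile(
    r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"systemd-logind\[\d+\]: New session (?P<id>\d+) of user \S+\.$"
)
CLOSE_RE = re.compile(
    r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.$"
)

def parse_ts(s: str) -> datetime:
    # No year in the stamp; strptime defaults it to 1900, which is
    # harmless when subtracting two stamps from the same day.
    return datetime.strptime(s, "%b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := OPEN_RE.match(line):
            opened[m["id"]] = parse_ts(m["ts"])
        elif m := CLOSE_RE.match(line):
            if m["id"] in opened:
                durations[m["id"]] = parse_ts(m["ts"]) - opened.pop(m["id"])
    return durations

print(session_durations([
    "Jan 30 13:54:46.036907 systemd-logind[1445]: New session 9 of user core.",
    "Jan 30 13:54:46.325280 systemd-logind[1445]: Removed session 9.",
]))  # {'9': datetime.timedelta(microseconds=288373)}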