Jan 13 21:24:41.105682 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:24:41.105727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:41.105746 kernel: BIOS-provided physical RAM map:
Jan 13 21:24:41.105760 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 21:24:41.105773 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 21:24:41.105788 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 21:24:41.105805 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 21:24:41.105823 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 21:24:41.105838 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 13 21:24:41.105852 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 13 21:24:41.105867 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 13 21:24:41.105882 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 13 21:24:41.105896 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 21:24:41.105911 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 21:24:41.105933 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 21:24:41.105950 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 21:24:41.105966 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 21:24:41.105982 kernel: NX (Execute Disable) protection: active
Jan 13 21:24:41.105998 kernel: APIC: Static calls initialized
Jan 13 21:24:41.106014 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:24:41.106031 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 13 21:24:41.106047 kernel: SMBIOS 2.4 present.
Jan 13 21:24:41.106063 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 21:24:41.106079 kernel: Hypervisor detected: KVM
Jan 13 21:24:41.106099 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:24:41.106115 kernel: kvm-clock: using sched offset of 12721025685 cycles
Jan 13 21:24:41.106132 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:24:41.106159 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 21:24:41.106176 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:24:41.106193 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:24:41.106210 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 21:24:41.106227 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 21:24:41.106243 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:24:41.106263 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 21:24:41.106279 kernel: Using GB pages for direct mapping
Jan 13 21:24:41.106295 kernel: Secure boot disabled
Jan 13 21:24:41.108210 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:24:41.108226 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 21:24:41.108243 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 21:24:41.108260 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 21:24:41.108287 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 21:24:41.108324 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 21:24:41.108341 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 21:24:41.108360 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 21:24:41.108377 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 21:24:41.108393 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 21:24:41.108409 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 21:24:41.108428 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 21:24:41.108444 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 21:24:41.108461 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 21:24:41.108475 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 21:24:41.108509 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 21:24:41.108528 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 21:24:41.108543 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 21:24:41.108558 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 21:24:41.108575 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 21:24:41.108597 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 21:24:41.108613 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:24:41.108628 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:24:41.108645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:24:41.108661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 21:24:41.108678 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 21:24:41.108696 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 21:24:41.108712 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 21:24:41.108729 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 13 21:24:41.108753 kernel: Zone ranges:
Jan 13 21:24:41.108771 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:24:41.108789 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:24:41.108807 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:24:41.108826 kernel: Movable zone start for each node
Jan 13 21:24:41.108844 kernel: Early memory node ranges
Jan 13 21:24:41.108861 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 21:24:41.108880 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 21:24:41.108906 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 13 21:24:41.108928 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 21:24:41.108946 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:24:41.108964 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 21:24:41.108982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:24:41.109000 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 21:24:41.109018 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 21:24:41.109037 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 21:24:41.109056 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 21:24:41.109074 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:24:41.109092 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:24:41.109115 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:24:41.109133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:24:41.109152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:24:41.109170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:24:41.109186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:24:41.109221 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:24:41.109239 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:24:41.109257 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:24:41.109280 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:24:41.109319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:24:41.109338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:24:41.109356 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:24:41.109374 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:24:41.109391 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:24:41.109408 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:24:41.109426 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:24:41.109445 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:41.109466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:24:41.109483 kernel: random: crng init done
Jan 13 21:24:41.109507 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 21:24:41.109525 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:24:41.109542 kernel: Fallback order for Node 0: 0
Jan 13 21:24:41.109559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 13 21:24:41.109576 kernel: Policy zone: Normal
Jan 13 21:24:41.109593 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:24:41.109610 kernel: software IO TLB: area num 2.
Jan 13 21:24:41.109632 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved)
Jan 13 21:24:41.109649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:24:41.109665 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:24:41.109683 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:24:41.109700 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:24:41.109716 kernel: Dynamic Preempt: voluntary
Jan 13 21:24:41.109734 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:24:41.109752 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:24:41.109787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:24:41.109805 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:24:41.109824 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:24:41.109845 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:24:41.109863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:24:41.109882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:24:41.109899 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:24:41.109918 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:24:41.109936 kernel: Console: colour dummy device 80x25
Jan 13 21:24:41.109958 kernel: printk: console [ttyS0] enabled
Jan 13 21:24:41.109977 kernel: ACPI: Core revision 20230628
Jan 13 21:24:41.109994 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:24:41.110013 kernel: x2apic enabled
Jan 13 21:24:41.110031 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:24:41.110049 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 21:24:41.110068 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:24:41.110086 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 21:24:41.110108 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 21:24:41.110127 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 21:24:41.110145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:24:41.110164 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:24:41.110182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:24:41.110200 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 21:24:41.110219 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:24:41.110236 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:24:41.110254 kernel: RETBleed: Mitigation: IBRS
Jan 13 21:24:41.110277 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:24:41.110294 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 21:24:41.111741 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:24:41.111888 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:24:41.111908 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:24:41.111925 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:24:41.111943 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:24:41.111961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:24:41.111980 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:24:41.112004 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:24:41.112026 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:24:41.112043 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:24:41.112061 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:24:41.112079 kernel: landlock: Up and running.
Jan 13 21:24:41.112097 kernel: SELinux: Initializing.
Jan 13 21:24:41.112115 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:24:41.112132 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:24:41.112149 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 21:24:41.112173 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:24:41.112193 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:24:41.112211 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:24:41.112232 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 21:24:41.112248 kernel: signal: max sigframe size: 1776
Jan 13 21:24:41.112265 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:24:41.112284 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:24:41.112332 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:24:41.112351 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:24:41.112376 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:24:41.112394 kernel: .... node #0, CPUs: #1
Jan 13 21:24:41.112411 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:24:41.112429 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:24:41.112446 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:24:41.112464 kernel: smpboot: Max logical packages: 1
Jan 13 21:24:41.112483 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 21:24:41.112510 kernel: devtmpfs: initialized
Jan 13 21:24:41.112535 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:24:41.112554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 21:24:41.112572 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:24:41.112591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:24:41.112609 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:24:41.112627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:24:41.112647 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:24:41.112665 kernel: audit: type=2000 audit(1736803479.381:1): state=initialized audit_enabled=0 res=1
Jan 13 21:24:41.112683 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:24:41.112705 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:24:41.112723 kernel: cpuidle: using governor menu
Jan 13 21:24:41.112741 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:24:41.112759 kernel: dca service started, version 1.12.1
Jan 13 21:24:41.112777 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:24:41.112796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:24:41.112814 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:24:41.112831 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:24:41.112850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:24:41.112872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:24:41.112889 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:24:41.112907 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:24:41.112925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:24:41.112944 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:24:41.112974 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:24:41.112991 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:24:41.113010 kernel: ACPI: Interpreter enabled
Jan 13 21:24:41.113029 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:24:41.113056 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:24:41.113453 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:24:41.113477 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 21:24:41.113505 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:24:41.113524 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:24:41.113779 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:24:41.113981 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:24:41.114179 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:24:41.114203 kernel: PCI host bridge to bus 0000:00
Jan 13 21:24:41.115648 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:24:41.115832 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:24:41.115999 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:41.116163 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 21:24:41.116622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:24:41.117107 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:24:41.117646 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 21:24:41.117864 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:24:41.118062 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:24:41.118255 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 21:24:41.118523 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 21:24:41.118720 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 21:24:41.118913 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:24:41.119100 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 21:24:41.119285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 21:24:41.119543 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:24:41.119732 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 21:24:41.119918 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 21:24:41.119949 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:24:41.119969 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:24:41.119989 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:24:41.120008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:24:41.120028 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:24:41.120048 kernel: iommu: Default domain type: Translated
Jan 13 21:24:41.120067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:24:41.120086 kernel: efivars: Registered efivars operations
Jan 13 21:24:41.120105 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:24:41.120128 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:24:41.120147 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 21:24:41.120167 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 21:24:41.120185 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 21:24:41.120204 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 21:24:41.120223 kernel: vgaarb: loaded
Jan 13 21:24:41.120242 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:24:41.120259 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:24:41.120279 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:24:41.121346 kernel: pnp: PnP ACPI init
Jan 13 21:24:41.121370 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 21:24:41.121388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:24:41.121406 kernel: NET: Registered PF_INET protocol family
Jan 13 21:24:41.121423 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:24:41.121441 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 21:24:41.121458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:24:41.121476 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:24:41.121494 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 21:24:41.121526 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 21:24:41.121543 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:24:41.121561 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:24:41.121579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:24:41.121596 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:24:41.121796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:24:41.121959 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:24:41.122119 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:41.122285 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 21:24:41.123543 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:24:41.123574 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:24:41.123595 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:24:41.123615 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 21:24:41.123635 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:24:41.123655 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:24:41.123675 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:24:41.123709 kernel: Initialise system trusted keyrings
Jan 13 21:24:41.123728 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 21:24:41.123747 kernel: Key type asymmetric registered
Jan 13 21:24:41.123767 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:24:41.123786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:24:41.123805 kernel: io scheduler mq-deadline registered
Jan 13 21:24:41.123825 kernel: io scheduler kyber registered
Jan 13 21:24:41.123850 kernel: io scheduler bfq registered
Jan 13 21:24:41.123869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:24:41.123895 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:24:41.124086 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 21:24:41.124112 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 21:24:41.126384 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 21:24:41.126423 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:24:41.126716 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 21:24:41.126741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:24:41.126761 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:24:41.126789 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 21:24:41.126820 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 21:24:41.126839 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 21:24:41.127104 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 21:24:41.127134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:24:41.127159 kernel: i8042: Warning: Keylock active
Jan 13 21:24:41.127178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:24:41.127196 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:24:41.127439 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:24:41.127630 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:24:41.127815 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:24:40 UTC (1736803480)
Jan 13 21:24:41.127984 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:24:41.128008 kernel: intel_pstate: CPU model not supported
Jan 13 21:24:41.128029 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:24:41.128049 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:24:41.128068 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:24:41.128087 kernel: Segment Routing with IPv6
Jan 13 21:24:41.128111 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:24:41.128131 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:24:41.128149 kernel: Key type dns_resolver registered
Jan 13 21:24:41.128169 kernel: IPI shorthand broadcast: enabled
Jan 13 21:24:41.128188 kernel: sched_clock: Marking stable (865004779, 156639669)->(1073588715, -51944267)
Jan 13 21:24:41.128208 kernel: registered taskstats version 1
Jan 13 21:24:41.128227 kernel: Loading compiled-in X.509 certificates
Jan 13 21:24:41.128246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:24:41.128264 kernel: Key type .fscrypt registered
Jan 13 21:24:41.128287 kernel: Key type fscrypt-provisioning registered
Jan 13 21:24:41.130342 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:24:41.130367 kernel: ima: No architecture policies found
Jan 13 21:24:41.130388 kernel: clk: Disabling unused clocks
Jan 13 21:24:41.130408 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:24:41.130428 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:24:41.130448 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:24:41.130468 kernel: Run /init as init process
Jan 13 21:24:41.130501 kernel: with arguments:
Jan 13 21:24:41.130521 kernel: /init
Jan 13 21:24:41.130540 kernel: with environment:
Jan 13 21:24:41.130559 kernel: HOME=/
Jan 13 21:24:41.130578 kernel: TERM=linux
Jan 13 21:24:41.130598 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:24:41.130618 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:24:41.130687 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:24:41.130717 systemd[1]: Detected virtualization google.
Jan 13 21:24:41.130745 systemd[1]: Detected architecture x86-64.
Jan 13 21:24:41.130765 systemd[1]: Running in initrd.
Jan 13 21:24:41.130786 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:24:41.130806 systemd[1]: Hostname set to <localhost>.
Jan 13 21:24:41.130828 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:24:41.130849 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:24:41.130870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:24:41.130895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:24:41.130916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:24:41.130937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:24:41.130958 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:24:41.130979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:24:41.131003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:24:41.131025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:24:41.131049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:24:41.131070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:24:41.131111 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:24:41.131144 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:24:41.131166 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:24:41.131188 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:24:41.131213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:24:41.131234 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:24:41.131255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:24:41.131278 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:24:41.131397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:24:41.131421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:24:41.131442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:24:41.131463 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:24:41.131484 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:24:41.131518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:24:41.131539 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:24:41.131560 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:24:41.131581 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:24:41.131602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:24:41.131623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:41.131645 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:24:41.131704 systemd-journald[183]: Collecting audit messages is disabled.
Jan 13 21:24:41.131755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:24:41.131783 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:24:41.131811 systemd-journald[183]: Journal started
Jan 13 21:24:41.131853 systemd-journald[183]: Runtime Journal (/run/log/journal/5fb2b5a0b6c44e0a92cca17ccad1728b) is 8.0M, max 148.7M, 140.7M free.
Jan 13 21:24:41.133708 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:24:41.108549 systemd-modules-load[184]: Inserted module 'overlay'
Jan 13 21:24:41.138428 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:24:41.165344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:24:41.168461 kernel: Bridge firewalling registered
Jan 13 21:24:41.167606 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 13 21:24:41.167747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:24:41.174055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:24:41.187408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:41.195987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:24:41.200125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:24:41.213586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:41.224681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:24:41.227801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:24:41.256607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:24:41.266748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:24:41.268172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:24:41.274917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:41.285858 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:24:41.322688 dracut-cmdline[216]: dracut-dracut-053
Jan 13 21:24:41.328211 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:41.329777 systemd-resolved[209]: Positive Trust Anchors:
Jan 13 21:24:41.329914 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:24:41.329985 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:24:41.336787 systemd-resolved[209]: Defaulting to hostname 'linux'.
Jan 13 21:24:41.338886 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:24:41.344610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:24:41.438368 kernel: SCSI subsystem initialized
Jan 13 21:24:41.449362 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:24:41.462368 kernel: iscsi: registered transport (tcp)
Jan 13 21:24:41.487516 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:24:41.487655 kernel: QLogic iSCSI HBA Driver
Jan 13 21:24:41.546900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:24:41.553601 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:24:41.601370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:24:41.601499 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:24:41.603485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:24:41.651396 kernel: raid6: avx2x4 gen() 24090 MB/s
Jan 13 21:24:41.668397 kernel: raid6: avx2x2 gen() 23441 MB/s
Jan 13 21:24:41.685828 kernel: raid6: avx2x1 gen() 20915 MB/s
Jan 13 21:24:41.685902 kernel: raid6: using algorithm avx2x4 gen() 24090 MB/s
Jan 13 21:24:41.703884 kernel: raid6: .... xor() 5871 MB/s, rmw enabled
Jan 13 21:24:41.704004 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:24:41.729384 kernel: xor: automatically using best checksumming function avx
Jan 13 21:24:41.906391 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:24:41.921363 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:24:41.932616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:24:41.951175 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 13 21:24:41.958576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:24:41.967615 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:24:42.003134 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 13 21:24:42.044943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:24:42.058595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:24:42.145799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:24:42.156160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:24:42.199254 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:24:42.223634 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:24:42.247512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:24:42.272810 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:24:42.304119 kernel: scsi host0: Virtio SCSI HBA
Jan 13 21:24:42.305834 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:24:42.298980 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:24:42.385895 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:24:42.387801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:42.415403 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:24:42.416735 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:24:42.416777 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 21:24:42.435218 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:42.456322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:24:42.456979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:42.502272 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 21:24:42.547812 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 21:24:42.548649 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 21:24:42.548924 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 21:24:42.549193 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:24:42.549464 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:24:42.549493 kernel: GPT:17805311 != 25165823
Jan 13 21:24:42.549524 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:24:42.549547 kernel: GPT:17805311 != 25165823
Jan 13 21:24:42.549570 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:24:42.549595 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:24:42.549622 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 21:24:42.528418 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:42.564015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:42.588667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:24:42.631510 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (461)
Jan 13 21:24:42.631559 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (457)
Jan 13 21:24:42.639814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 21:24:42.664931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:42.692098 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 21:24:42.699476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 21:24:42.728712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 21:24:42.743654 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 21:24:42.773625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:24:42.801719 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:42.818689 disk-uuid[539]: Primary Header is updated.
Jan 13 21:24:42.818689 disk-uuid[539]: Secondary Entries is updated.
Jan 13 21:24:42.818689 disk-uuid[539]: Secondary Header is updated.
Jan 13 21:24:42.852529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:24:42.869342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:24:42.884279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:42.910530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:24:43.888364 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:24:43.889283 disk-uuid[540]: The operation has completed successfully.
Jan 13 21:24:43.972326 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:24:43.972516 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:24:44.003588 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:24:44.038503 sh[565]: Success
Jan 13 21:24:44.065350 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:24:44.167160 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:24:44.175458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:24:44.203023 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:24:44.242363 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:24:44.242484 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:44.259750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:24:44.259878 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:24:44.266586 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:24:44.305339 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:24:44.386610 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:24:44.387694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:24:44.393577 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:24:44.404509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:24:44.474419 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:44.474510 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:44.474540 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:24:44.493613 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:24:44.493691 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:24:44.510181 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:24:44.528508 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:44.540240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:24:44.563716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:24:44.602364 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:24:44.624031 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:24:44.686026 systemd-networkd[747]: lo: Link UP
Jan 13 21:24:44.686501 systemd-networkd[747]: lo: Gained carrier
Jan 13 21:24:44.688761 systemd-networkd[747]: Enumeration completed
Jan 13 21:24:44.689492 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:24:44.689909 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:44.689916 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:24:44.693095 systemd-networkd[747]: eth0: Link UP
Jan 13 21:24:44.770177 ignition[708]: Ignition 2.19.0
Jan 13 21:24:44.693101 systemd-networkd[747]: eth0: Gained carrier
Jan 13 21:24:44.770198 ignition[708]: Stage: fetch-offline
Jan 13 21:24:44.693114 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:44.770246 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.712592 systemd-networkd[747]: eth0: DHCPv4 address 10.128.0.101/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:24:44.770257 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.728692 systemd[1]: Reached target network.target - Network.
Jan 13 21:24:44.770450 ignition[708]: parsed url from cmdline: ""
Jan 13 21:24:44.772563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:24:44.770457 ignition[708]: no config URL provided
Jan 13 21:24:44.795534 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:24:44.770464 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.832715 unknown[757]: fetched base config from "system"
Jan 13 21:24:44.770474 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.832741 unknown[757]: fetched base config from "system"
Jan 13 21:24:44.770483 ignition[708]: failed to fetch config: resource requires networking
Jan 13 21:24:44.832752 unknown[757]: fetched user config from "gcp"
Jan 13 21:24:44.770702 ignition[708]: Ignition finished successfully
Jan 13 21:24:44.854883 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:24:44.821359 ignition[757]: Ignition 2.19.0
Jan 13 21:24:44.881520 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:24:44.821372 ignition[757]: Stage: fetch
Jan 13 21:24:44.905278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:24:44.821675 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.923537 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:24:44.821693 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.975612 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:24:44.821854 ignition[757]: parsed url from cmdline: ""
Jan 13 21:24:44.976766 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:24:44.821861 ignition[757]: no config URL provided
Jan 13 21:24:44.990637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:24:44.821871 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:24:45.016641 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:24:44.821888 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:24:45.024655 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:24:44.821933 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:24:45.038678 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:24:44.825536 ignition[757]: GET result: OK
Jan 13 21:24:45.062622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:24:44.825688 ignition[757]: parsing config with SHA512: 7251ec6fa1bdaa872258df72ea5c6df2eceee305c535a8d423359f84a5c0638b47b4ea9d6f418b55abec5380e123e881b4b403279fb102c134113f9c56db23f7
Jan 13 21:24:44.833756 ignition[757]: fetch: fetch complete
Jan 13 21:24:44.834693 ignition[757]: fetch: fetch passed
Jan 13 21:24:44.834767 ignition[757]: Ignition finished successfully
Jan 13 21:24:44.902848 ignition[764]: Ignition 2.19.0
Jan 13 21:24:44.902857 ignition[764]: Stage: kargs
Jan 13 21:24:44.903055 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.903068 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.904093 ignition[764]: kargs: kargs passed
Jan 13 21:24:44.904157 ignition[764]: Ignition finished successfully
Jan 13 21:24:44.973206 ignition[769]: Ignition 2.19.0
Jan 13 21:24:44.973218 ignition[769]: Stage: disks
Jan 13 21:24:44.973462 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.973475 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.974500 ignition[769]: disks: disks passed
Jan 13 21:24:44.974559 ignition[769]: Ignition finished successfully
Jan 13 21:24:45.115682 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:24:45.285481 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:24:45.290478 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:24:45.446347 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:24:45.447797 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:24:45.448993 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:24:45.479508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:45.490583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:24:45.514925 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:24:45.515037 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:24:45.601720 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (786)
Jan 13 21:24:45.601774 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:45.601793 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:45.601808 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:24:45.601824 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:24:45.601839 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:24:45.515088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:24:45.572755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:24:45.612479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:24:45.636696 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:24:45.768585 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:24:45.779570 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:24:45.789559 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:24:45.799501 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:24:45.948873 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:24:45.954470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:24:45.995368 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:45.999628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:24:46.009901 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:24:46.042805 ignition[898]: INFO : Ignition 2.19.0
Jan 13 21:24:46.042805 ignition[898]: INFO : Stage: mount
Jan 13 21:24:46.057652 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:46.057652 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:46.057652 ignition[898]: INFO : mount: mount passed
Jan 13 21:24:46.057652 ignition[898]: INFO : Ignition finished successfully
Jan 13 21:24:46.046149 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:24:46.078154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:24:46.111957 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:24:46.370561 systemd-networkd[747]: eth0: Gained IPv6LL
Jan 13 21:24:46.460610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:46.484333 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (910)
Jan 13 21:24:46.502340 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:46.502428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:46.502454 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:24:46.526029 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:24:46.526115 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:24:46.529274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:24:46.566836 ignition[927]: INFO : Ignition 2.19.0
Jan 13 21:24:46.566836 ignition[927]: INFO : Stage: files
Jan 13 21:24:46.581456 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:46.581456 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:46.581456 ignition[927]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:24:46.581456 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:24:46.581456 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:24:46.581456 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:24:46.578998 unknown[927]: wrote ssh authorized keys file for user: core
Jan 13 21:24:46.717462 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:24:46.855370 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:24:47.167046 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:24:47.560652 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:47.560652 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:24:47.599484 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:47.599484 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:47.599484 ignition[927]: INFO : files: files passed Jan 13 21:24:47.599484 ignition[927]: INFO : Ignition finished successfully Jan 13 21:24:47.566018 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:24:47.585548 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:24:47.622536 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:24:47.665047 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:24:47.809598 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:47.809598 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:47.665164 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
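The op(1) through op(e) entries above are Ignition executing, in order, the directives of the instance's provisioning config. That config is not reproduced in this log; a Butane sketch that would generate ops of this shape, compiled to Ignition JSON with the butane CLI, might look roughly as follows (the SSH key, file list, and unit body are placeholders, not the real values from this machine):

cat > config.bu <<'EOF'
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core                                    # op(1)/op(2)
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... placeholder@example
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz    # op(3)
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw          # op(9)
      target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service                    # op(b)-op(d)
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm tarball (placeholder body)
EOF
butane --pretty --strict config.bu > config.ign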
Jan 13 21:24:47.875511 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:47.686019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:47.720908 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:24:47.743520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:24:47.804214 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:24:47.804365 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:24:47.820660 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:24:47.844515 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:24:47.865618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:24:47.872512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:24:47.920371 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:47.939512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:24:47.985997 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:48.006663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:48.030771 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:24:48.049676 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:24:48.049880 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:48.082820 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:24:48.102738 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:24:48.120741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:24:48.138655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:24:48.157703 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:24:48.179723 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:24:48.199660 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:24:48.219743 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:24:48.239789 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:24:48.259731 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:24:48.277625 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:24:48.277861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:24:48.308787 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:48.326658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:48.347673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:24:48.347840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:48.365602 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:24:48.365827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 13 21:24:48.486516 ignition[980]: INFO : Ignition 2.19.0 Jan 13 21:24:48.486516 ignition[980]: INFO : Stage: umount Jan 13 21:24:48.486516 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:48.486516 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:24:48.486516 ignition[980]: INFO : umount: umount passed Jan 13 21:24:48.486516 ignition[980]: INFO : Ignition finished successfully Jan 13 21:24:48.394722 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:24:48.394958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:48.415818 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:24:48.416028 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:24:48.443603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:24:48.476462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:24:48.476910 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:48.504647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:24:48.519444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:24:48.519715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:48.531778 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:24:48.532004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:24:48.573343 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:24:48.574295 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:24:48.574447 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:24:48.589257 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:24:48.589414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:24:48.611164 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:24:48.611278 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:24:48.633152 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:24:48.633225 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:24:48.650607 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:24:48.650703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:24:48.668593 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:24:48.668726 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:24:48.688581 systemd[1]: Stopped target network.target - Network. Jan 13 21:24:48.706482 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:24:48.706612 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:24:48.728576 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:24:48.745475 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:24:48.747432 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:48.766479 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:24:48.783489 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:24:48.800565 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 13 21:24:48.800660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:24:48.822080 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:24:48.822173 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:24:48.840526 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:24:48.840629 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:24:48.858564 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:24:48.858665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:24:48.876572 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:24:48.876723 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:24:48.894851 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:24:48.897377 systemd-networkd[747]: eth0: DHCPv6 lease lost Jan 13 21:24:48.913801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:24:48.932998 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:24:48.933133 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:24:48.945383 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:24:48.945642 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:24:48.972273 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:24:48.972399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:48.985438 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:24:49.022437 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:24:49.484545 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 13 21:24:49.022561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:24:49.040592 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:24:49.040702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:49.058687 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:24:49.058770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:49.068729 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:24:49.068802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:49.086844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:49.116005 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:24:49.116186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:49.141649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:24:49.141805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:49.159557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:24:49.159637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:49.176674 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:24:49.176767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:24:49.212643 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 13 21:24:49.212887 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:24:49.238708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:24:49.238796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:49.288541 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:24:49.302432 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:24:49.302555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:49.313534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:49.313619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:49.325066 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:24:49.325189 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:24:49.335060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:24:49.335262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:24:49.363243 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:24:49.386533 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:24:49.425044 systemd[1]: Switching root. Jan 13 21:24:49.806537 systemd-journald[183]: Journal stopped
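At "Switching root" the initrd journal is stopped and, on this console, the journal appears to be re-emitted from the start of boot; the 21:24:41 kernel lines that follow look like that replay. After boot, the same initrd-phase entries can be pulled back out of the journal (assuming it was flushed to persistent storage), for example:

journalctl --list-boots                      # identify the boot in question
journalctl -b -o short-precise -t ignition   # entries logged as ignition[927]/ignition[980]
journalctl -b -u initrd-switch-root.service  # the switch-root transition itself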
Jan 13 21:24:41.108678 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 13 21:24:41.108696 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 13 21:24:41.108712 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 13 21:24:41.108729 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 13 21:24:41.108753 kernel: Zone ranges: Jan 13 21:24:41.108771 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:24:41.108789 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:24:41.108807 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:24:41.108826 kernel: Movable zone start for each node Jan 13 21:24:41.108844 kernel: Early memory node ranges Jan 13 21:24:41.108861 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 13 21:24:41.108880 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 13 21:24:41.108906 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 13 21:24:41.108928 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 13 21:24:41.108946 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:24:41.108964 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 13 21:24:41.108982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:24:41.109000 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 13 21:24:41.109018 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 13 21:24:41.109037 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 21:24:41.109056 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 13 21:24:41.109074 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 21:24:41.109092 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:24:41.109115 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:24:41.109133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:24:41.109152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:24:41.109170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:24:41.109186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:24:41.109221 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:24:41.109239 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:24:41.109257 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 21:24:41.109280 kernel: Booting paravirtualized kernel on KVM Jan 13 21:24:41.109319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:24:41.109338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:24:41.109356 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:24:41.109374 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:24:41.109391 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:24:41.109408 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:24:41.109426
kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:24:41.109445 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:24:41.109466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:24:41.109483 kernel: random: crng init done Jan 13 21:24:41.109507 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 13 21:24:41.109525 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:24:41.109542 kernel: Fallback order for Node 0: 0 Jan 13 21:24:41.109559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 13 21:24:41.109576 kernel: Policy zone: Normal Jan 13 21:24:41.109593 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:24:41.109610 kernel: software IO TLB: area num 2. Jan 13 21:24:41.109632 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Jan 13 21:24:41.109649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:24:41.109665 kernel: Kernel/User page tables isolation: enabled Jan 13 21:24:41.109683 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:24:41.109700 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:24:41.109716 kernel: Dynamic Preempt: voluntary Jan 13 21:24:41.109734 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:24:41.109752 kernel: rcu: RCU event tracing is enabled. Jan 13 21:24:41.109787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:24:41.109805 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:24:41.109824 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:24:41.109845 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:24:41.109863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:24:41.109882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:24:41.109899 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:24:41.109918 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:24:41.109936 kernel: Console: colour dummy device 80x25 Jan 13 21:24:41.109958 kernel: printk: console [ttyS0] enabled Jan 13 21:24:41.109977 kernel: ACPI: Core revision 20230628 Jan 13 21:24:41.109994 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:24:41.110013 kernel: x2apic enabled Jan 13 21:24:41.110031 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:24:41.110049 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 13 21:24:41.110068 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:24:41.110086 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 13 21:24:41.110108 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 13 21:24:41.110127 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 13 21:24:41.110145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:24:41.110164 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 21:24:41.110182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 21:24:41.110200 kernel: Spectre V2 : Mitigation: IBRS Jan 13 21:24:41.110219 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:24:41.110236 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:24:41.110254 kernel: RETBleed: Mitigation: IBRS Jan 13 21:24:41.110277 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:24:41.110294 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 13 21:24:41.111741 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:24:41.111888 kernel: MDS: Mitigation: Clear CPU buffers Jan 13 21:24:41.111908 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 21:24:41.111925 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:24:41.111943 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:24:41.111961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:24:41.111980 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:24:41.112004 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 13 21:24:41.112026 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:24:41.112043 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:24:41.112061 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:24:41.112079 kernel: landlock: Up and running. Jan 13 21:24:41.112097 kernel: SELinux: Initializing. Jan 13 21:24:41.112115 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:24:41.112132 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:24:41.112149 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 13 21:24:41.112173 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:24:41.112193 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:24:41.112211 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:24:41.112232 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 13 21:24:41.112248 kernel: signal: max sigframe size: 1776 Jan 13 21:24:41.112265 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:24:41.112284 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:24:41.112332 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:24:41.112351 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:24:41.112376 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:24:41.112394 kernel: .... node #0, CPUs: #1 Jan 13 21:24:41.112411 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 21:24:41.112429 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 21:24:41.112446 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:24:41.112464 kernel: smpboot: Max logical packages: 1 Jan 13 21:24:41.112483 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 13 21:24:41.112510 kernel: devtmpfs: initialized Jan 13 21:24:41.112535 kernel: x86/mm: Memory block size: 128MB Jan 13 21:24:41.112554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 13 21:24:41.112572 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:24:41.112591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:24:41.112609 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:24:41.112627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:24:41.112647 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:24:41.112665 kernel: audit: type=2000 audit(1736803479.381:1): state=initialized audit_enabled=0 res=1 Jan 13 21:24:41.112683 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:24:41.112705 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:24:41.112723 kernel: cpuidle: using governor menu Jan 13 21:24:41.112741 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:24:41.112759 kernel: dca service started, version 1.12.1 Jan 13 21:24:41.112777 kernel: PCI: Using configuration type 1 for base access Jan 13 21:24:41.112796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
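The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data) have sysfs counterparts, so the same status can be read back on the running system:

grep . /sys/devices/system/cpu/vulnerabilities/*
# expect output along the lines of ".../mds:Mitigation: Clear CPU buffers; SMT vulnerable"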
Jan 13 21:24:41.112814 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:24:41.112831 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:24:41.112850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:24:41.112872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:24:41.112889 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:24:41.112907 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:24:41.112925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:24:41.112944 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:24:41.112974 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 21:24:41.112991 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:24:41.113010 kernel: ACPI: Interpreter enabled Jan 13 21:24:41.113029 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:24:41.113056 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:24:41.113453 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:24:41.113477 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 13 21:24:41.113505 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 21:24:41.113524 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:24:41.113779 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:24:41.113981 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:24:41.114179 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:24:41.114203 kernel: PCI host bridge to bus 0000:00 Jan 13 21:24:41.115648 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:24:41.115832 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:24:41.115999 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:24:41.116163 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 13 21:24:41.116622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:24:41.117107 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:24:41.117646 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 13 21:24:41.117864 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 21:24:41.118062 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 21:24:41.118255 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 13 21:24:41.118523 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 13 21:24:41.118720 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 13 21:24:41.118913 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:24:41.119100 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 13 21:24:41.119285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 13 21:24:41.119543 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:24:41.119732 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 13 21:24:41.119918 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 13 21:24:41.119949 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:24:41.119969 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:24:41.119989 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:24:41.120008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:24:41.120028 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:24:41.120048 kernel: iommu: Default domain type: Translated Jan 13 21:24:41.120067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:24:41.120086 kernel: efivars: Registered efivars operations Jan 13 21:24:41.120105 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:24:41.120128 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:24:41.120147 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 13 21:24:41.120167 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 13 21:24:41.120185 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 13 21:24:41.120204 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 13 21:24:41.120223 kernel: vgaarb: loaded Jan 13 21:24:41.120242 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:24:41.120259 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:24:41.120279 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:24:41.121346 kernel: pnp: PnP ACPI init Jan 13 21:24:41.121370 kernel: pnp: PnP ACPI: found 7 devices Jan 13 21:24:41.121388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:24:41.121406 kernel: NET: Registered PF_INET protocol family Jan 13 21:24:41.121423 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 21:24:41.121441 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 13 21:24:41.121458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:24:41.121476 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:24:41.121494 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 21:24:41.121526 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 13 21:24:41.121543 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:24:41.121561 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:24:41.121579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:24:41.121596 kernel: NET: Registered PF_XDP protocol family Jan 13 21:24:41.121796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:24:41.121959 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:24:41.122119 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:24:41.122285 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 13 21:24:41.123543 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:24:41.123574 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:24:41.123595 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:24:41.123615 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 13 21:24:41.123635 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:24:41.123655 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:24:41.123675 kernel: clocksource: Switched to clocksource tsc Jan 13 21:24:41.123709 kernel: Initialise system trusted keyrings Jan 13 21:24:41.123728 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 13 21:24:41.123747 kernel: Key type asymmetric registered Jan 13 21:24:41.123767 kernel: Asymmetric key parser 'x509' registered Jan 13 21:24:41.123786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:24:41.123805 kernel: io scheduler mq-deadline registered Jan 13 21:24:41.123825 kernel: io scheduler kyber registered Jan 13 21:24:41.123850 kernel: io scheduler bfq registered Jan 13 21:24:41.123869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:24:41.123895 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 21:24:41.124086 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 13 21:24:41.124112 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 13 21:24:41.126384 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 13 21:24:41.126423 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 21:24:41.126716 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 13 21:24:41.126741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:24:41.126761 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:24:41.126789 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 21:24:41.126820 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 13 21:24:41.126839 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 13 21:24:41.127104 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 13 21:24:41.127134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:24:41.127159 kernel: i8042: Warning: Keylock active Jan 13 21:24:41.127178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:24:41.127196 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:24:41.127439 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 21:24:41.127630 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 21:24:41.127815 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:24:40 UTC (1736803480) Jan 13 21:24:41.127984 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 21:24:41.128008 kernel: intel_pstate: CPU model not supported Jan 13 21:24:41.128029 kernel: pstore: Using crash dump compression: deflate Jan 13 21:24:41.128049 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 21:24:41.128068 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:24:41.128087 kernel: Segment Routing with IPv6 Jan 13 21:24:41.128111 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:24:41.128131 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:24:41.128149 kernel: Key type dns_resolver registered Jan 13 21:24:41.128169 kernel: IPI shorthand broadcast: enabled Jan 13 21:24:41.128188 kernel: sched_clock: Marking stable (865004779, 156639669)->(1073588715, -51944267) Jan 13 21:24:41.128208 kernel: registered taskstats version 1 Jan 13 21:24:41.128227 kernel: Loading compiled-in X.509 certificates Jan 13 21:24:41.128246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:24:41.128264 kernel: Key type .fscrypt registered Jan 13 21:24:41.128287 kernel: Key type fscrypt-provisioning registered Jan 13 21:24:41.130342 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:24:41.130367 kernel: ima: No architecture policies found Jan 13 
21:24:41.130388 kernel: clk: Disabling unused clocks Jan 13 21:24:41.130408 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:24:41.130428 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:24:41.130448 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:24:41.130468 kernel: Run /init as init process Jan 13 21:24:41.130501 kernel: with arguments: Jan 13 21:24:41.130521 kernel: /init Jan 13 21:24:41.130540 kernel: with environment: Jan 13 21:24:41.130559 kernel: HOME=/ Jan 13 21:24:41.130578 kernel: TERM=linux Jan 13 21:24:41.130598 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:24:41.130618 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:24:41.130687 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:24:41.130717 systemd[1]: Detected virtualization google. Jan 13 21:24:41.130745 systemd[1]: Detected architecture x86-64. Jan 13 21:24:41.130765 systemd[1]: Running in initrd. Jan 13 21:24:41.130786 systemd[1]: No hostname configured, using default hostname. Jan 13 21:24:41.130806 systemd[1]: Hostname set to . Jan 13 21:24:41.130828 systemd[1]: Initializing machine ID from random generator. Jan 13 21:24:41.130849 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:24:41.130870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:41.130895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:41.130916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:24:41.130937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:24:41.130958 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:24:41.130979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:24:41.131003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:24:41.131025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:24:41.131049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:41.131070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:41.131111 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:24:41.131144 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:24:41.131166 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:24:41.131188 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:24:41.131213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:24:41.131234 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:24:41.131255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:24:41.131278 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
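The "Detected virtualization google" line comes from systemd's virtualization probe, which is also available as a standalone tool for scripts that need to branch on the platform:

systemd-detect-virt        # prints the detected VMM; per the log above, "google" here
systemd-detect-virt --vm   # exits 0 only when running inside a virtual machine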
Jan 13 21:24:41.131397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:41.131421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:41.131442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:41.131463 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:24:41.131484 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:24:41.131518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:24:41.131539 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:24:41.131560 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:24:41.131581 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:24:41.131602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:24:41.131623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:41.131645 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:24:41.131704 systemd-journald[183]: Collecting audit messages is disabled. Jan 13 21:24:41.131755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:41.131783 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:24:41.131811 systemd-journald[183]: Journal started Jan 13 21:24:41.131853 systemd-journald[183]: Runtime Journal (/run/log/journal/5fb2b5a0b6c44e0a92cca17ccad1728b) is 8.0M, max 148.7M, 140.7M free. Jan 13 21:24:41.133708 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:24:41.108549 systemd-modules-load[184]: Inserted module 'overlay' Jan 13 21:24:41.138428 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:24:41.165344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:24:41.168461 kernel: Bridge firewalling registered Jan 13 21:24:41.167606 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 13 21:24:41.167747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:24:41.174055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:41.187408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:41.195987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:24:41.200125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:41.213586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:41.224681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:24:41.227801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:24:41.256607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:41.266748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:24:41.268172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:41.274917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:24:41.285858 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:24:41.322688 dracut-cmdline[216]: dracut-dracut-053 Jan 13 21:24:41.328211 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:24:41.329777 systemd-resolved[209]: Positive Trust Anchors: Jan 13 21:24:41.329914 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:24:41.329985 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:24:41.336787 systemd-resolved[209]: Defaulting to hostname 'linux'. Jan 13 21:24:41.338886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:24:41.344610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:41.438368 kernel: SCSI subsystem initialized Jan 13 21:24:41.449362 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:24:41.462368 kernel: iscsi: registered transport (tcp) Jan 13 21:24:41.487516 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:24:41.487655 kernel: QLogic iSCSI HBA Driver Jan 13 21:24:41.546900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:24:41.553601 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:24:41.601370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:24:41.601499 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:24:41.603485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:24:41.651396 kernel: raid6: avx2x4 gen() 24090 MB/s Jan 13 21:24:41.668397 kernel: raid6: avx2x2 gen() 23441 MB/s Jan 13 21:24:41.685828 kernel: raid6: avx2x1 gen() 20915 MB/s Jan 13 21:24:41.685902 kernel: raid6: using algorithm avx2x4 gen() 24090 MB/s Jan 13 21:24:41.703884 kernel: raid6: .... xor() 5871 MB/s, rmw enabled Jan 13 21:24:41.704004 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:24:41.729384 kernel: xor: automatically using best checksumming function avx Jan 13 21:24:41.906391 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:24:41.921363 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:24:41.932616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:41.951175 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 13 21:24:41.958576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 21:24:41.967615 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:24:42.003134 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 13 21:24:42.044943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:24:42.058595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:24:42.145799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:42.156160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:24:42.199254 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:24:42.223634 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:24:42.247512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:42.272810 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:24:42.304119 kernel: scsi host0: Virtio SCSI HBA Jan 13 21:24:42.305834 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:24:42.298980 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:24:42.385895 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:24:42.387801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:42.415403 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:24:42.416735 kernel: AES CTR mode by8 optimization enabled Jan 13 21:24:42.416777 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 13 21:24:42.435218 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:42.456322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:42.456979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:42.502272 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 13 21:24:42.547812 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 13 21:24:42.548649 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 13 21:24:42.548924 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 13 21:24:42.549193 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 21:24:42.549464 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:24:42.549493 kernel: GPT:17805311 != 25165823 Jan 13 21:24:42.549524 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:24:42.549547 kernel: GPT:17805311 != 25165823 Jan 13 21:24:42.549570 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:24:42.549595 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:24:42.549622 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 13 21:24:42.528418 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:42.564015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:42.588667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 21:24:42.631510 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (461) Jan 13 21:24:42.631559 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (457) Jan 13 21:24:42.639814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 13 21:24:42.664931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:42.692098 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 13 21:24:42.699476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 21:24:42.728712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 13 21:24:42.743654 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 13 21:24:42.773625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:24:42.801719 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:42.818689 disk-uuid[539]: Primary Header is updated. Jan 13 21:24:42.818689 disk-uuid[539]: Secondary Entries is updated. Jan 13 21:24:42.818689 disk-uuid[539]: Secondary Header is updated. Jan 13 21:24:42.852529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:24:42.869342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:24:42.884279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:42.910530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:24:43.888364 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:24:43.889283 disk-uuid[540]: The operation has completed successfully. Jan 13 21:24:43.972326 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:24:43.972516 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:24:44.003588 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:24:44.038503 sh[565]: Success Jan 13 21:24:44.065350 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:24:44.167160 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:24:44.175458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:24:44.203023 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:24:44.242363 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:24:44.242484 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:44.259750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:24:44.259878 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:24:44.266586 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:24:44.305339 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:24:44.386610 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:24:44.387694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:24:44.393577 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
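verity-setup.service above maps the read-only /usr partition through dm-verity, checking it against the verity.usrhash root hash passed on the kernel command line, and the result appears as /dev/mapper/usr. On a booted system the mapping can be inspected with veritysetup from cryptsetup; a brief sketch (the mapping name usr matches the log, the commands are illustrative and not part of the boot flow):

    # Show the dm-verity mapping protecting /usr, including its status and hash type.
    veritysetup status usr
    # The root hash it must match comes from the kernel command line.
    grep -o 'verity.usrhash=[^ ]*' /proc/cmdline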
Jan 13 21:24:44.404509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:24:44.474419 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:44.474510 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:44.474540 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:24:44.493613 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:24:44.493691 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:24:44.510181 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:24:44.528508 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:44.540240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:24:44.563716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:24:44.602364 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:24:44.624031 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:24:44.686026 systemd-networkd[747]: lo: Link UP
Jan 13 21:24:44.686501 systemd-networkd[747]: lo: Gained carrier
Jan 13 21:24:44.688761 systemd-networkd[747]: Enumeration completed
Jan 13 21:24:44.689492 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:24:44.689909 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:44.689916 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:24:44.693095 systemd-networkd[747]: eth0: Link UP
Jan 13 21:24:44.693101 systemd-networkd[747]: eth0: Gained carrier
Jan 13 21:24:44.693114 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:44.712592 systemd-networkd[747]: eth0: DHCPv4 address 10.128.0.101/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:24:44.728692 systemd[1]: Reached target network.target - Network.
Jan 13 21:24:44.770177 ignition[708]: Ignition 2.19.0
Jan 13 21:24:44.770198 ignition[708]: Stage: fetch-offline
Jan 13 21:24:44.770246 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.770257 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.770450 ignition[708]: parsed url from cmdline: ""
Jan 13 21:24:44.770457 ignition[708]: no config URL provided
Jan 13 21:24:44.770464 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.770474 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.770483 ignition[708]: failed to fetch config: resource requires networking
Jan 13 21:24:44.770702 ignition[708]: Ignition finished successfully
Jan 13 21:24:44.772563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:24:44.795534 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:24:44.821359 ignition[757]: Ignition 2.19.0
Jan 13 21:24:44.821372 ignition[757]: Stage: fetch
Jan 13 21:24:44.821675 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.821693 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.821854 ignition[757]: parsed url from cmdline: ""
Jan 13 21:24:44.821861 ignition[757]: no config URL provided
Jan 13 21:24:44.821871 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.821888 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:24:44.821933 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:24:44.825536 ignition[757]: GET result: OK
Jan 13 21:24:44.825688 ignition[757]: parsing config with SHA512: 7251ec6fa1bdaa872258df72ea5c6df2eceee305c535a8d423359f84a5c0638b47b4ea9d6f418b55abec5380e123e881b4b403279fb102c134113f9c56db23f7
Jan 13 21:24:44.832715 unknown[757]: fetched base config from "system"
Jan 13 21:24:44.832741 unknown[757]: fetched base config from "system"
Jan 13 21:24:44.832752 unknown[757]: fetched user config from "gcp"
Jan 13 21:24:44.833756 ignition[757]: fetch: fetch complete
Jan 13 21:24:44.834693 ignition[757]: fetch: fetch passed
Jan 13 21:24:44.834767 ignition[757]: Ignition finished successfully
Jan 13 21:24:44.854883 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:24:44.881520 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:24:44.902848 ignition[764]: Ignition 2.19.0
Jan 13 21:24:44.902857 ignition[764]: Stage: kargs
Jan 13 21:24:44.903055 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.903068 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.904093 ignition[764]: kargs: kargs passed
Jan 13 21:24:44.904157 ignition[764]: Ignition finished successfully
Jan 13 21:24:44.905278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:24:44.923537 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:24:44.973206 ignition[769]: Ignition 2.19.0
Jan 13 21:24:44.973218 ignition[769]: Stage: disks
Jan 13 21:24:44.973462 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:44.973475 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:44.974500 ignition[769]: disks: disks passed
Jan 13 21:24:44.974559 ignition[769]: Ignition finished successfully
Jan 13 21:24:44.975612 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:24:44.976766 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:24:44.990637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:24:45.016641 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:24:45.024655 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:24:45.038678 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:24:45.062622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:24:45.115682 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:24:45.285481 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:24:45.290478 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:24:45.446347 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:24:45.447797 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:24:45.448993 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
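The fetch stage above pulls the instance's Ignition config from the GCE metadata service once networking is up. The same request can be reproduced from a shell on the instance; the Metadata-Flavor header is mandatory or the server rejects the query (curl shown as an illustration, not part of the boot flow):

    # Fetch the same user-data document Ignition retrieved during boot.
    curl -H "Metadata-Flavor: Google" \
      "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"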
Jan 13 21:24:45.479508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:45.490583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:24:45.514925 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:24:45.515037 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:24:45.515088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:24:45.572755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:24:45.601720 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (786)
Jan 13 21:24:45.601774 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:45.601793 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:45.601808 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:24:45.601824 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:24:45.601839 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:24:45.612479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:24:45.636696 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:24:45.768585 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:24:45.779570 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:24:45.789559 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:24:45.799501 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:24:45.948873 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:24:45.954470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:24:45.995368 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:45.999628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:24:46.009901 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:24:46.042805 ignition[898]: INFO : Ignition 2.19.0
Jan 13 21:24:46.042805 ignition[898]: INFO : Stage: mount
Jan 13 21:24:46.057652 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:46.057652 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:46.057652 ignition[898]: INFO : mount: mount passed
Jan 13 21:24:46.057652 ignition[898]: INFO : Ignition finished successfully
Jan 13 21:24:46.046149 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:24:46.078154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:24:46.111957 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:24:46.370561 systemd-networkd[747]: eth0: Gained IPv6LL
Jan 13 21:24:46.460610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:46.484333 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (910) Jan 13 21:24:46.502340 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:46.502428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:46.502454 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:24:46.526029 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:24:46.526115 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:24:46.529274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:24:46.566836 ignition[927]: INFO : Ignition 2.19.0 Jan 13 21:24:46.566836 ignition[927]: INFO : Stage: files Jan 13 21:24:46.581456 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:46.581456 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:24:46.581456 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:24:46.581456 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:24:46.581456 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:24:46.581456 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:24:46.581456 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:24:46.578998 unknown[927]: wrote ssh authorized keys file for user: core Jan 13 21:24:46.717462 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:24:46.855370 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:24:46.872485 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:24:47.167046 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:24:47.560652 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:24:47.560652 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:24:47.599484 ignition[927]: INFO : files: files passed
Jan 13 21:24:47.599484 ignition[927]: INFO : Ignition finished successfully
Jan 13 21:24:47.566018 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:24:47.585548 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:24:47.622536 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:24:47.665047 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:24:47.665164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:24:47.686019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:24:47.720908 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:24:47.743520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:24:47.804214 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:24:47.804365 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:24:47.809598 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:24:47.809598 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:24:47.820660 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:24:47.844515 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:24:47.865618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:24:47.872512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:24:47.875511 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:24:47.920371 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:24:47.939512 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:24:47.985997 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:24:48.006663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:24:48.030771 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:24:48.049676 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:24:48.049880 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:24:48.082820 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:24:48.102738 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:24:48.120741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:24:48.138655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:24:48.157703 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:24:48.179723 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:24:48.199660 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:24:48.219743 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:24:48.239789 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:24:48.259731 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:24:48.277625 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:24:48.277861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:24:48.308787 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:24:48.326658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:24:48.347673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:24:48.347840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:24:48.365602 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:24:48.365827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:24:48.394722 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:24:48.394958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:24:48.415818 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:24:48.416028 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:24:48.443603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:24:48.476462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:24:48.476910 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:24:48.486516 ignition[980]: INFO : Ignition 2.19.0
Jan 13 21:24:48.486516 ignition[980]: INFO : Stage: umount
Jan 13 21:24:48.486516 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:48.486516 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:24:48.486516 ignition[980]: INFO : umount: umount passed
Jan 13 21:24:48.486516 ignition[980]: INFO : Ignition finished successfully
Jan 13 21:24:48.504647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:24:48.519444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:24:48.519715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:24:48.531778 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:24:48.532004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:24:48.573343 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:24:48.574295 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:24:48.574447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:24:48.589257 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:24:48.589414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:24:48.611164 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:24:48.611278 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:24:48.633152 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:24:48.633225 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:24:48.650607 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:24:48.650703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:24:48.668593 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:24:48.668726 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:24:48.688581 systemd[1]: Stopped target network.target - Network.
Jan 13 21:24:48.706482 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:24:48.706612 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:24:48.728576 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:24:48.745475 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:24:48.747432 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:24:48.766479 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:24:48.783489 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:24:48.800565 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:24:48.800660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:24:48.822080 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:24:48.822173 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:24:48.840526 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:24:48.840629 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:24:48.858564 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:24:48.858665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:24:48.876572 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:24:48.876723 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:24:48.894851 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:24:48.897377 systemd-networkd[747]: eth0: DHCPv6 lease lost
Jan 13 21:24:48.913801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:24:48.932998 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:24:48.933133 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:24:48.945383 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:24:48.945642 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:24:48.972273 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:24:48.972399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:24:48.985438 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:24:49.022437 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:24:49.022561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:24:49.040592 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:24:49.040702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:24:49.058687 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:24:49.058770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:24:49.068729 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:24:49.068802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:24:49.086844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:24:49.116005 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:24:49.116186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:24:49.141649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:24:49.141805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:24:49.159557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:24:49.159637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:24:49.176674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:24:49.176767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:24:49.212643 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:24:49.212887 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:24:49.238708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:24:49.238796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:49.288541 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:24:49.302432 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:24:49.302555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:24:49.313534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:24:49.313619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:49.325066 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:24:49.325189 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:24:49.335060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:24:49.335262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:24:49.363243 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:24:49.386533 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:24:49.425044 systemd[1]: Switching root.
Jan 13 21:24:49.484545 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:24:49.806537 systemd-journald[183]: Journal stopped
Jan 13 21:24:52.277537 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:24:52.277597 kernel: SELinux: policy capability open_perms=1
Jan 13 21:24:52.277618 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:24:52.277635 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:24:52.277651 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:24:52.277668 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:24:52.277696 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:24:52.277720 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:24:52.277818 kernel: audit: type=1403 audit(1736803490.113:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:24:52.277842 systemd[1]: Successfully loaded SELinux policy in 95.512ms.
Jan 13 21:24:52.277864 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.593ms.
Jan 13 21:24:52.277886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:24:52.277906 systemd[1]: Detected virtualization google.
Jan 13 21:24:52.277927 systemd[1]: Detected architecture x86-64.
Jan 13 21:24:52.277953 systemd[1]: Detected first boot.
Jan 13 21:24:52.277975 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:24:52.277996 zram_generator::config[1022]: No configuration found.
Jan 13 21:24:52.278017 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:24:52.278039 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:24:52.278063 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:24:52.278084 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:24:52.278106 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:24:52.278127 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:24:52.278148 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:24:52.278172 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:24:52.278193 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:24:52.278219 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:24:52.278241 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:24:52.278429 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:24:52.278462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:52.278482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:52.278502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:24:52.278521 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:24:52.278542 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:24:52.278571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:24:52.278592 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:24:52.278610 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:52.278630 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:24:52.278649 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:24:52.278681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:24:52.278712 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:24:52.278735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:52.278757 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:24:52.278784 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:24:52.278807 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:24:52.278829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:24:52.278851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:24:52.278874 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:52.278897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:52.278919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:52.278946 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:24:52.278969 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:24:52.278992 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:24:52.279015 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:24:52.279037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 21:24:52.279066 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:24:52.279089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:24:52.279112 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:24:52.279135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:24:52.279157 systemd[1]: Reached target machines.target - Containers. Jan 13 21:24:52.279180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:24:52.279203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:52.279226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:24:52.279252 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:24:52.279275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:52.279325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:24:52.279350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:52.279373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:24:52.279395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:52.279417 kernel: fuse: init (API version 7.39) Jan 13 21:24:52.279440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:24:52.279467 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:24:52.279490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:24:52.279512 kernel: ACPI: bus type drm_connector registered Jan 13 21:24:52.279532 kernel: loop: module loaded Jan 13 21:24:52.279553 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:24:52.279576 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:24:52.279600 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:24:52.279622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:24:52.279687 systemd-journald[1109]: Collecting audit messages is disabled. Jan 13 21:24:52.279739 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:24:52.279763 systemd-journald[1109]: Journal started Jan 13 21:24:52.279810 systemd-journald[1109]: Runtime Journal (/run/log/journal/24d81e521de342b795d6429c4f418410) is 8.0M, max 148.7M, 140.7M free. Jan 13 21:24:51.068598 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:24:51.093188 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 21:24:51.093801 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:24:52.302513 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:24:52.348181 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:24:52.348333 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:24:52.355343 systemd[1]: Stopped verity-setup.service. 
Jan 13 21:24:52.388051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:52.396401 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:24:52.408911 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:24:52.418672 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:24:52.428727 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:24:52.438813 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:24:52.448791 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:24:52.458727 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:24:52.468964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:24:52.481994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:52.493996 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:24:52.494250 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:24:52.505892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:52.506128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:52.517885 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:24:52.518183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:24:52.528951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:52.529194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:52.540951 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:24:52.541187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:24:52.551930 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:52.552178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:52.562924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:52.572958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:24:52.584965 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:24:52.596901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:52.622323 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:24:52.641515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:24:52.663550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:24:52.673504 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:24:52.673601 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:24:52.684954 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:24:52.703579 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:24:52.726597 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 13 21:24:52.736641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:52.743606 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:24:52.759635 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:24:52.769484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:24:52.778998 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:24:52.789862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:24:52.804375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:24:52.810343 systemd-journald[1109]: Time spent on flushing to /var/log/journal/24d81e521de342b795d6429c4f418410 is 103.946ms for 928 entries. Jan 13 21:24:52.810343 systemd-journald[1109]: System Journal (/var/log/journal/24d81e521de342b795d6429c4f418410) is 8.0M, max 584.8M, 576.8M free. Jan 13 21:24:52.955279 systemd-journald[1109]: Received client request to flush runtime journal. Jan 13 21:24:52.955510 kernel: loop0: detected capacity change from 0 to 54824 Jan 13 21:24:52.832682 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:24:52.851541 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:24:52.871525 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:24:52.893155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:24:52.904665 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:24:52.915932 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:24:52.927968 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:24:52.955542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:52.965365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:24:52.973244 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:24:52.995007 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:24:53.018606 kernel: loop1: detected capacity change from 0 to 210664 Jan 13 21:24:53.021856 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:24:53.035092 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:24:53.056761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:24:53.072794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:24:53.076456 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:24:53.091466 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:24:53.134345 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 21:24:53.137855 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jan 13 21:24:53.137894 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. 
Jan 13 21:24:53.152897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:53.265351 kernel: loop3: detected capacity change from 0 to 140768 Jan 13 21:24:53.361031 kernel: loop4: detected capacity change from 0 to 54824 Jan 13 21:24:53.398336 kernel: loop5: detected capacity change from 0 to 210664 Jan 13 21:24:53.441350 kernel: loop6: detected capacity change from 0 to 142488 Jan 13 21:24:53.519335 kernel: loop7: detected capacity change from 0 to 140768 Jan 13 21:24:53.570872 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 13 21:24:53.577534 (sd-merge)[1164]: Merged extensions into '/usr'. Jan 13 21:24:53.585201 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:24:53.585224 systemd[1]: Reloading... Jan 13 21:24:53.735331 zram_generator::config[1186]: No configuration found. Jan 13 21:24:53.952835 ldconfig[1135]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:24:54.012687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:24:54.116873 systemd[1]: Reloading finished in 530 ms. Jan 13 21:24:54.146397 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:24:54.157017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:24:54.181452 systemd[1]: Starting ensure-sysext.service... Jan 13 21:24:54.199722 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:24:54.219372 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:24:54.219412 systemd[1]: Reloading... Jan 13 21:24:54.263718 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:24:54.265030 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:24:54.266882 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:24:54.268759 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 13 21:24:54.268979 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 13 21:24:54.279465 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:24:54.279668 systemd-tmpfiles[1231]: Skipping /boot Jan 13 21:24:54.305580 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:24:54.305772 systemd-tmpfiles[1231]: Skipping /boot Jan 13 21:24:54.331914 zram_generator::config[1254]: No configuration found. Jan 13 21:24:54.484950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:24:54.550149 systemd[1]: Reloading finished in 330 ms. Jan 13 21:24:54.567507 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:24:54.586030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
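The (sd-merge) lines above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-gce extension images onto /usr, which is why a reload of systemd follows (the reload is requested by the systemd-sysext client PID shown in the log). After boot the merge can be examined or redone with the same tool; a brief sketch:

    # Which hierarchies currently have system extensions merged, and from where.
    systemd-sysext status
    # Installed extension images (e.g. the kubernetes sysext written by Ignition).
    systemd-sysext list
    # Re-evaluate the extension directories after adding or removing an image.
    systemd-sysext refresh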
Jan 13 21:24:54.613807 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:24:54.638323 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:24:54.656914 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:24:54.673835 augenrules[1317]: No rules Jan 13 21:24:54.678502 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:24:54.694728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:54.712827 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:24:54.727749 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:24:54.745120 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:24:54.759872 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 13 21:24:54.767971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:54.768742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:54.775714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:54.793585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:54.813840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:54.823628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:54.833748 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:24:54.854675 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:24:54.866216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:54.868748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:54.884400 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:24:54.897413 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:24:54.910219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:54.910507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:54.923343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:54.923613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:54.934268 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:54.934558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:54.946400 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:24:55.005384 systemd[1]: Finished ensure-sysext.service. Jan 13 21:24:55.014845 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:24:55.038278 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 21:24:55.039677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:55.050528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:55.067562 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:24:55.084555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:55.104593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:55.121544 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:24:55.130574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:55.149940 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:24:55.159743 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:24:55.169512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:24:55.169559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:55.170732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:55.172201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:55.185833 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:24:55.186387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:24:55.202342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:24:55.223040 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 21:24:55.246828 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1352) Jan 13 21:24:55.229983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:55.230246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:55.241738 systemd-resolved[1322]: Positive Trust Anchors: Jan 13 21:24:55.241755 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:24:55.241825 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:24:55.241956 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:55.242870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:55.252485 systemd-resolved[1322]: Defaulting to hostname 'linux'. 
Jan 13 21:24:55.260466 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:24:55.270204 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 21:24:55.269801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:24:55.278436 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:24:55.285688 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:24:55.312776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:24:55.427530 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 21:24:55.448347 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:24:55.451615 systemd-networkd[1374]: lo: Link UP Jan 13 21:24:55.451635 systemd-networkd[1374]: lo: Gained carrier Jan 13 21:24:55.453268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 21:24:55.461999 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:24:55.455708 systemd-networkd[1374]: Enumeration completed Jan 13 21:24:55.456344 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:55.456351 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:24:55.457017 systemd-networkd[1374]: eth0: Link UP Jan 13 21:24:55.457024 systemd-networkd[1374]: eth0: Gained carrier Jan 13 21:24:55.457048 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:55.469505 systemd-networkd[1374]: eth0: DHCPv4 address 10.128.0.101/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 21:24:55.470633 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:24:55.488522 systemd[1]: Reached target network.target - Network. Jan 13 21:24:55.497509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:55.516612 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 21:24:55.535182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:24:55.557197 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:24:55.568499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:24:55.568610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:24:55.577574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:55.588569 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:24:55.601070 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 21:24:55.601606 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:24:55.612643 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:24:55.632031 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:24:55.672707 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
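Here eth0 was matched by the catch-all zz-default.network and picked up a GCE-style DHCPv4 lease (a /32 address with an on-link gateway at 10.128.0.1). A minimal sketch of such a fallback networkd unit; the actual Flatcar file may carry more options:

    cat <<'EOF' >/etc/systemd/network/zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0    # should report "routable" once the lease lands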
Jan 13 21:24:55.673292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:55.678556 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:24:55.694282 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:24:55.714882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:55.727419 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:24:55.737834 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:24:55.749562 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:24:55.760739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:24:55.770706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:24:55.782521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:24:55.793501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:24:55.793564 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:24:55.802465 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:24:55.811391 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:24:55.823291 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:24:55.835051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:24:55.846604 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:24:55.857876 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:24:55.868435 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:24:55.878521 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:24:55.887588 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:24:55.887642 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:24:55.899498 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:24:55.915569 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:24:55.936434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:24:55.968758 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:24:55.987588 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:24:55.997475 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:24:56.005557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:24:56.009596 jq[1421]: false Jan 13 21:24:56.026570 systemd[1]: Started ntpd.service - Network Time Service. 
Jan 13 21:24:56.034005 coreos-metadata[1419]: Jan 13 21:24:56.033 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 21:24:56.041664 coreos-metadata[1419]: Jan 13 21:24:56.041 INFO Fetch successful Jan 13 21:24:56.041664 coreos-metadata[1419]: Jan 13 21:24:56.041 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 21:24:56.041944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:24:56.043627 coreos-metadata[1419]: Jan 13 21:24:56.043 INFO Fetch successful Jan 13 21:24:56.043627 coreos-metadata[1419]: Jan 13 21:24:56.043 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 21:24:56.050342 coreos-metadata[1419]: Jan 13 21:24:56.048 INFO Fetch successful Jan 13 21:24:56.050342 coreos-metadata[1419]: Jan 13 21:24:56.048 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 21:24:56.053335 coreos-metadata[1419]: Jan 13 21:24:56.051 INFO Fetch successful Jan 13 21:24:56.059854 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:24:56.069362 extend-filesystems[1424]: Found loop4 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found loop5 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found loop6 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found loop7 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda1 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda2 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda3 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found usr Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda4 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda6 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda7 Jan 13 21:24:56.069362 extend-filesystems[1424]: Found sda9 Jan 13 21:24:56.069362 extend-filesystems[1424]: Checking size of /dev/sda9 Jan 13 21:24:56.290504 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 21:24:56.290581 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 21:24:56.290633 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1346) Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: ---------------------------------------------------- Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: corporation. 
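The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254, which requires the Metadata-Flavor header on every computeMetadata/v1 request. The same lookups by hand:

    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip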
Support and training for ntp-4 are Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: available at https://www.nwtime.org/support Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: ---------------------------------------------------- Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: proto: precision = 0.068 usec (-24) Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: basedate set to 2025-01-01 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: gps base set to 2025-01-05 (week 2348) Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listen normally on 3 eth0 10.128.0.101:123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listen normally on 4 lo [::1]:123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: bind(21) AF_INET6 fe80::4001:aff:fe80:65%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:65%2#123 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: failed to init interface for address fe80::4001:aff:fe80:65%2 Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: Listening on routing socket on fd #21 for interface updates Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:24:56.290704 ntpd[1426]: 13 Jan 21:24:56 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:24:56.107526 dbus-daemon[1420]: [system] SELinux support is enabled Jan 13 21:24:56.080604 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:24:56.295379 extend-filesystems[1424]: Resized partition /dev/sda9 Jan 13 21:24:56.117004 dbus-daemon[1420]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1374 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:24:56.100228 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:24:56.312865 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:24:56.312865 extend-filesystems[1446]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 21:24:56.312865 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 21:24:56.312865 extend-filesystems[1446]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 13 21:24:56.150098 ntpd[1426]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:24:56.352134 update_engine[1445]: I20250113 21:24:56.173424 1445 main.cc:92] Flatcar Update Engine starting Jan 13 21:24:56.352134 update_engine[1445]: I20250113 21:24:56.177635 1445 update_check_scheduler.cc:74] Next update check in 4m57s Jan 13 21:24:56.121119 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
Jan 13 21:24:56.354809 extend-filesystems[1424]: Resized filesystem in /dev/sda9 Jan 13 21:24:56.150135 ntpd[1426]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:24:56.122255 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:24:56.150150 ntpd[1426]: ---------------------------------------------------- Jan 13 21:24:56.131881 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:24:56.150166 ntpd[1426]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:24:56.370864 jq[1447]: true Jan 13 21:24:56.183488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:24:56.150179 ntpd[1426]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:24:56.211527 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:24:56.150194 ntpd[1426]: corporation. Support and training for ntp-4 are Jan 13 21:24:56.249916 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:24:56.150208 ntpd[1426]: available at https://www.nwtime.org/support Jan 13 21:24:56.251423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:24:56.150222 ntpd[1426]: ---------------------------------------------------- Jan 13 21:24:56.251959 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:24:56.152926 ntpd[1426]: proto: precision = 0.068 usec (-24) Jan 13 21:24:56.253687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:24:56.155344 ntpd[1426]: basedate set to 2025-01-01 Jan 13 21:24:56.258013 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:24:56.155411 ntpd[1426]: gps base set to 2025-01-05 (week 2348) Jan 13 21:24:56.261635 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:24:56.171415 ntpd[1426]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:24:56.289918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:24:56.171493 ntpd[1426]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:24:56.290404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:24:56.182553 ntpd[1426]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:24:56.373119 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:24:56.182623 ntpd[1426]: Listen normally on 3 eth0 10.128.0.101:123 Jan 13 21:24:56.392423 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
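The extend-filesystems output above is an online grow of the root ext4 filesystem from 1617920 to 2538491 4k blocks. A hand-run equivalent, assuming the underlying partition has already been enlarged (e.g. by growpart):

    resize2fs /dev/sda9    # ext4 grows online while mounted on /
    df -h /                # confirm the new capacity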
Jan 13 21:24:56.182689 ntpd[1426]: Listen normally on 4 lo [::1]:123 Jan 13 21:24:56.182764 ntpd[1426]: bind(21) AF_INET6 fe80::4001:aff:fe80:65%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:24:56.182796 ntpd[1426]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:65%2#123 Jan 13 21:24:56.182820 ntpd[1426]: failed to init interface for address fe80::4001:aff:fe80:65%2 Jan 13 21:24:56.182868 ntpd[1426]: Listening on routing socket on fd #21 for interface updates Jan 13 21:24:56.200761 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:24:56.200801 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:24:56.342797 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:24:56.408682 jq[1458]: true Jan 13 21:24:56.411879 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:24:56.413965 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:24:56.414388 systemd-logind[1439]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 21:24:56.414420 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:24:56.415201 systemd-logind[1439]: New seat seat0. Jan 13 21:24:56.419716 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:24:56.451101 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:24:56.452672 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:24:56.452903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:24:56.481209 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:24:56.491468 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:24:56.493600 tar[1457]: linux-amd64/helm Jan 13 21:24:56.491756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:24:56.514687 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:24:56.550061 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 13 21:24:56.557376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:24:56.572202 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:24:56.598765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:24:56.621867 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:24:56.639840 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 13 21:24:56.684116 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:24:56.696423 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:24:56.727858 systemd[1]: Starting sshkeys.service... 
Jan 13 21:24:56.759343 init.sh[1494]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 13 21:24:56.759343 init.sh[1494]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 13 21:24:56.759343 init.sh[1494]: + /usr/bin/google_instance_setup Jan 13 21:24:56.783186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:24:56.829155 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:24:56.849106 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:24:56.896978 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:24:56.897240 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:24:56.902502 dbus-daemon[1420]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1475 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:24:56.921950 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:24:56.935149 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:24:57.004094 coreos-metadata[1504]: Jan 13 21:24:57.003 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 21:24:57.009874 coreos-metadata[1504]: Jan 13 21:24:57.009 INFO Fetch failed with 404: resource not found Jan 13 21:24:57.009874 coreos-metadata[1504]: Jan 13 21:24:57.009 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 21:24:57.011947 coreos-metadata[1504]: Jan 13 21:24:57.010 INFO Fetch successful Jan 13 21:24:57.011947 coreos-metadata[1504]: Jan 13 21:24:57.010 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 21:24:57.011947 coreos-metadata[1504]: Jan 13 21:24:57.011 INFO Fetch failed with 404: resource not found Jan 13 21:24:57.011947 coreos-metadata[1504]: Jan 13 21:24:57.011 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 21:24:57.013534 coreos-metadata[1504]: Jan 13 21:24:57.012 INFO Fetch failed with 404: resource not found Jan 13 21:24:57.013534 coreos-metadata[1504]: Jan 13 21:24:57.012 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 21:24:57.020996 coreos-metadata[1504]: Jan 13 21:24:57.014 INFO Fetch successful Jan 13 21:24:57.029539 unknown[1504]: wrote ssh authorized keys file for user: core Jan 13 21:24:57.140344 update-ssh-keys[1518]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:24:57.146281 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:24:57.165039 systemd[1]: Finished sshkeys.service. Jan 13 21:24:57.181927 polkitd[1507]: Started polkitd version 121 Jan 13 21:24:57.186454 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:24:57.239833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
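The SSH-key agent walks a fixed fallback chain above: instance-level sshKeys, then ssh-keys, then (unless block-project-ssh-keys is set) the project-level attributes. A simplified sketch of that order as plain curl calls; the real agent also honors the block-project flag, which this loop omits:

    MD=http://169.254.169.254/computeMetadata/v1
    H='Metadata-Flavor: Google'
    for p in instance/attributes/sshKeys instance/attributes/ssh-keys \
             project/attributes/sshKeys project/attributes/ssh-keys; do
      curl -sf -H "$H" "$MD/$p" && break    # -f turns a 404 into a non-zero exit
    done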
Jan 13 21:24:57.245156 polkitd[1507]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:24:57.245293 polkitd[1507]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:24:57.257880 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:24:57.269567 polkitd[1507]: Finished loading, compiling and executing 2 rules Jan 13 21:24:57.281974 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:24:57.283722 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:24:57.288381 polkitd[1507]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:24:57.296853 systemd[1]: Started sshd@0-10.128.0.101:22-147.75.109.163:39332.service - OpenSSH per-connection server daemon (147.75.109.163:39332). Jan 13 21:24:57.311904 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:24:57.379854 systemd-hostnamed[1475]: Hostname set to (transient) Jan 13 21:24:57.389055 systemd-resolved[1322]: System hostname changed to 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal'. Jan 13 21:24:57.394065 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:24:57.394450 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:24:57.412246 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:24:57.494978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:24:57.517989 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:24:57.537056 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:24:57.548963 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:24:57.626100 containerd[1459]: time="2025-01-13T21:24:57.625504352Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:24:57.762210 containerd[1459]: time="2025-01-13T21:24:57.758609247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.767602 containerd[1459]: time="2025-01-13T21:24:57.767513915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:57.767822 containerd[1459]: time="2025-01-13T21:24:57.767797874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:24:57.767922 containerd[1459]: time="2025-01-13T21:24:57.767903872Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:24:57.768340 containerd[1459]: time="2025-01-13T21:24:57.768278828Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:24:57.768495 containerd[1459]: time="2025-01-13T21:24:57.768471778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.768711 containerd[1459]: time="2025-01-13T21:24:57.768679802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:57.768806 containerd[1459]: time="2025-01-13T21:24:57.768787918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.769246 containerd[1459]: time="2025-01-13T21:24:57.769211341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:57.769481 containerd[1459]: time="2025-01-13T21:24:57.769452574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.769590 containerd[1459]: time="2025-01-13T21:24:57.769568760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:57.769669 containerd[1459]: time="2025-01-13T21:24:57.769647425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.769985 containerd[1459]: time="2025-01-13T21:24:57.769898273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.770573 containerd[1459]: time="2025-01-13T21:24:57.770542616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:57.771058 containerd[1459]: time="2025-01-13T21:24:57.771028228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:57.771160 containerd[1459]: time="2025-01-13T21:24:57.771142177Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:24:57.771477 containerd[1459]: time="2025-01-13T21:24:57.771449177Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:24:57.771681 containerd[1459]: time="2025-01-13T21:24:57.771640437Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.784874680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.784997209Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.785029050Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.785059688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.785090613Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:24:57.785711 containerd[1459]: time="2025-01-13T21:24:57.785477921Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.789584469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.789981956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790045673Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790077838Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790130158Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790180876Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790216919Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790255193Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790292835Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790381516Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790412969Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790459586Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790496173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791037 containerd[1459]: time="2025-01-13T21:24:57.790520047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790542812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790569043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790593440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790626846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790650645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790674986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790700238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790729247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790765022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790788610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790824420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790860640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790898524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790919433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.791818 containerd[1459]: time="2025-01-13T21:24:57.790938866Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.793742382Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794000336Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794027097Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794051617Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794071848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794096744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794181117Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:24:57.796128 containerd[1459]: time="2025-01-13T21:24:57.794205271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:24:57.796672 containerd[1459]: time="2025-01-13T21:24:57.794819953Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:24:57.796672 containerd[1459]: time="2025-01-13T21:24:57.794960366Z" level=info msg="Connect containerd service" Jan 13 21:24:57.796672 containerd[1459]: time="2025-01-13T21:24:57.795038205Z" level=info msg="using legacy CRI server" Jan 13 21:24:57.796672 containerd[1459]: time="2025-01-13T21:24:57.795053174Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:24:57.796672 containerd[1459]: time="2025-01-13T21:24:57.795275783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:24:57.801597 containerd[1459]: time="2025-01-13T21:24:57.799931067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:24:57.801597 
containerd[1459]: time="2025-01-13T21:24:57.801031679Z" level=info msg="Start subscribing containerd event" Jan 13 21:24:57.801597 containerd[1459]: time="2025-01-13T21:24:57.801126769Z" level=info msg="Start recovering state" Jan 13 21:24:57.801597 containerd[1459]: time="2025-01-13T21:24:57.801252927Z" level=info msg="Start event monitor" Jan 13 21:24:57.801597 containerd[1459]: time="2025-01-13T21:24:57.801279569Z" level=info msg="Start snapshots syncer" Jan 13 21:24:57.802662 containerd[1459]: time="2025-01-13T21:24:57.801295895Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:24:57.802662 containerd[1459]: time="2025-01-13T21:24:57.802124825Z" level=info msg="Start streaming server" Jan 13 21:24:57.803439 containerd[1459]: time="2025-01-13T21:24:57.801518226Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:24:57.805209 containerd[1459]: time="2025-01-13T21:24:57.803853847Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:24:57.805793 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:24:57.807903 containerd[1459]: time="2025-01-13T21:24:57.806448943Z" level=info msg="containerd successfully booted in 0.193271s" Jan 13 21:24:57.874061 sshd[1540]: Accepted publickey for core from 147.75.109.163 port 39332 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:57.883284 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:57.908881 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:24:57.929407 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:24:57.948908 systemd-logind[1439]: New session 1 of user core. Jan 13 21:24:57.993897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:24:58.021026 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:24:58.069907 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:24:58.164337 tar[1457]: linux-amd64/LICENSE Jan 13 21:24:58.164337 tar[1457]: linux-amd64/README.md Jan 13 21:24:58.220714 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:24:58.335206 systemd[1556]: Queued start job for default target default.target. Jan 13 21:24:58.345631 systemd[1556]: Created slice app.slice - User Application Slice. Jan 13 21:24:58.345700 systemd[1556]: Reached target paths.target - Paths. Jan 13 21:24:58.345727 systemd[1556]: Reached target timers.target - Timers. Jan 13 21:24:58.351650 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:24:58.385142 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:24:58.386727 systemd[1556]: Reached target sockets.target - Sockets. Jan 13 21:24:58.386765 systemd[1556]: Reached target basic.target - Basic System. Jan 13 21:24:58.386860 systemd[1556]: Reached target default.target - Main User Target. Jan 13 21:24:58.386920 systemd[1556]: Startup finished in 296ms. Jan 13 21:24:58.388674 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:24:58.408917 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:24:58.444353 instance-setup[1500]: INFO Running google_set_multiqueue. Jan 13 21:24:58.461861 instance-setup[1500]: INFO Set channels for eth0 to 2. 
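The CRI config dump above shows runc configured with SystemdCgroup:true. In containerd 1.7's on-disk configuration that corresponds to this config.toml fragment; a sketch of the standard layout, not the literal Flatcar file:

    cat <<'EOF' >>/etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF
    systemctl restart containerd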
Jan 13 21:24:58.468578 instance-setup[1500]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 13 21:24:58.470495 instance-setup[1500]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 13 21:24:58.470581 instance-setup[1500]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 13 21:24:58.472796 instance-setup[1500]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 13 21:24:58.473124 instance-setup[1500]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 13 21:24:58.474763 instance-setup[1500]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 13 21:24:58.475413 instance-setup[1500]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 13 21:24:58.477275 instance-setup[1500]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 13 21:24:58.487988 instance-setup[1500]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:24:58.493167 instance-setup[1500]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:24:58.495281 instance-setup[1500]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 13 21:24:58.495365 instance-setup[1500]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 13 21:24:58.527757 init.sh[1494]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 13 21:24:58.680943 systemd[1]: Started sshd@1-10.128.0.101:22-147.75.109.163:42026.service - OpenSSH per-connection server daemon (147.75.109.163:42026). Jan 13 21:24:58.857103 startup-script[1598]: INFO Starting startup scripts. Jan 13 21:24:58.865501 startup-script[1598]: INFO No startup scripts found in metadata. Jan 13 21:24:58.865644 startup-script[1598]: INFO Finished running startup scripts. Jan 13 21:24:58.903073 init.sh[1494]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 13 21:24:58.903073 init.sh[1494]: + daemon_pids=() Jan 13 21:24:58.903073 init.sh[1494]: + for d in accounts clock_skew network Jan 13 21:24:58.903073 init.sh[1494]: + daemon_pids+=($!) Jan 13 21:24:58.903073 init.sh[1494]: + for d in accounts clock_skew network Jan 13 21:24:58.903073 init.sh[1494]: + daemon_pids+=($!) Jan 13 21:24:58.903073 init.sh[1494]: + for d in accounts clock_skew network Jan 13 21:24:58.903671 init.sh[1605]: + /usr/bin/google_accounts_daemon Jan 13 21:24:58.904164 init.sh[1606]: + /usr/bin/google_clock_skew_daemon Jan 13 21:24:58.904484 init.sh[1607]: + /usr/bin/google_network_daemon Jan 13 21:24:58.904771 init.sh[1494]: + daemon_pids+=($!) Jan 13 21:24:58.904771 init.sh[1494]: + NOTIFY_SOCKET=/run/systemd/notify Jan 13 21:24:58.904771 init.sh[1494]: + /usr/bin/systemd-notify --ready Jan 13 21:24:58.932334 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 13 21:24:58.947489 init.sh[1494]: + wait -n 1605 1606 1607 Jan 13 21:24:59.046162 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 42026 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:59.048967 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:59.071006 systemd-logind[1439]: New session 2 of user core. Jan 13 21:24:59.076454 systemd[1]: Started session-2.scope - Session 2 of User core. 
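The google_set_multiqueue work above reduces to two writes per queue: pin each virtio-net queue's IRQ to one CPU via procfs, then set the transmit XPS CPU mask via sysfs. Replaying this boot's values by hand (IRQ numbers and masks vary per instance):

    echo 0 > /proc/irq/31/smp_affinity_list             # queue 0 IRQs -> CPU 0
    echo 1 > /proc/irq/33/smp_affinity_list             # queue 1 IRQs -> CPU 1
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # queue 0 XPS mask = CPU 0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # queue 1 XPS mask = CPU 1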
Jan 13 21:24:59.151781 ntpd[1426]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:65%2]:123 Jan 13 21:24:59.152577 ntpd[1426]: 13 Jan 21:24:59 ntpd[1426]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:65%2]:123 Jan 13 21:24:59.293830 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:59.304914 systemd[1]: sshd@1-10.128.0.101:22-147.75.109.163:42026.service: Deactivated successfully. Jan 13 21:24:59.312939 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:24:59.317633 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:24:59.322662 systemd-logind[1439]: Removed session 2. Jan 13 21:24:59.361810 systemd[1]: Started sshd@2-10.128.0.101:22-147.75.109.163:42034.service - OpenSSH per-connection server daemon (147.75.109.163:42034). Jan 13 21:24:59.415672 google-clock-skew[1606]: INFO Starting Google Clock Skew daemon. Jan 13 21:24:59.434752 google-clock-skew[1606]: INFO Clock drift token has changed: 0. Jan 13 21:24:59.446374 google-networking[1607]: INFO Starting Google Networking daemon. Jan 13 21:24:59.510637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:24:59.523359 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:24:59.528426 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:24:59.532175 groupadd[1626]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 13 21:24:59.534257 systemd[1]: Startup finished in 1.043s (kernel) + 9.329s (initrd) + 9.504s (userspace) = 19.877s. Jan 13 21:24:59.537260 groupadd[1626]: group added to /etc/gshadow: name=google-sudoers Jan 13 21:24:59.616925 groupadd[1626]: new group: name=google-sudoers, GID=1000 Jan 13 21:24:59.655813 google-accounts[1605]: INFO Starting Google Accounts daemon. Jan 13 21:24:59.670576 google-accounts[1605]: WARNING OS Login not installed. Jan 13 21:24:59.672414 google-accounts[1605]: INFO Creating a new user account for 0. Jan 13 21:24:59.677895 init.sh[1642]: useradd: invalid user name '0': use --badname to ignore Jan 13 21:24:59.678408 google-accounts[1605]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 13 21:24:59.696943 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 42034 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:59.699520 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:59.707780 systemd-logind[1439]: New session 3 of user core. Jan 13 21:24:59.715679 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:24:59.916586 sshd[1620]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:59.922972 systemd[1]: sshd@2-10.128.0.101:22-147.75.109.163:42034.service: Deactivated successfully. Jan 13 21:24:59.927147 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:24:59.930395 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:24:59.932460 systemd-logind[1439]: Removed session 3. Jan 13 21:25:00.000125 systemd-resolved[1322]: Clock change detected. Flushing caches. Jan 13 21:25:00.003411 google-clock-skew[1606]: INFO Synced system time with hardware clock. 
Jan 13 21:25:00.290565 kubelet[1630]: E0113 21:25:00.290342 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:00.294643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:00.294971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:00.295574 systemd[1]: kubelet.service: Consumed 1.400s CPU time. Jan 13 21:25:09.628238 systemd[1]: Started sshd@3-10.128.0.101:22-147.75.109.163:42264.service - OpenSSH per-connection server daemon (147.75.109.163:42264). Jan 13 21:25:09.926597 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 42264 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:09.928936 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:09.935056 systemd-logind[1439]: New session 4 of user core. Jan 13 21:25:09.942986 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:25:10.143883 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:10.149177 systemd[1]: sshd@3-10.128.0.101:22-147.75.109.163:42264.service: Deactivated successfully. Jan 13 21:25:10.152231 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:25:10.154812 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:25:10.156657 systemd-logind[1439]: Removed session 4. Jan 13 21:25:10.200254 systemd[1]: Started sshd@4-10.128.0.101:22-147.75.109.163:42272.service - OpenSSH per-connection server daemon (147.75.109.163:42272). Jan 13 21:25:10.435479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:25:10.446412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:10.496395 sshd[1663]: Accepted publickey for core from 147.75.109.163 port 42272 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:10.498649 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:10.506053 systemd-logind[1439]: New session 5 of user core. Jan 13 21:25:10.514135 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:25:10.707285 sshd[1663]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:10.717246 systemd[1]: sshd@4-10.128.0.101:22-147.75.109.163:42272.service: Deactivated successfully. Jan 13 21:25:10.723118 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:25:10.732525 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:25:10.759060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:10.763570 systemd-logind[1439]: Removed session 5. Jan 13 21:25:10.771476 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:10.774281 systemd[1]: Started sshd@5-10.128.0.101:22-147.75.109.163:42274.service - OpenSSH per-connection server daemon (147.75.109.163:42274). 
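The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so the unit fails and is restarted until that happens. For illustration only, a hypothetical minimal stand-in for the missing file:

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches the SystemdCgroup=true runc setting
    EOF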
Jan 13 21:25:10.851531 kubelet[1677]: E0113 21:25:10.851435 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:10.857175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:10.857469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:11.079821 sshd[1679]: Accepted publickey for core from 147.75.109.163 port 42274 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:11.082187 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:11.089163 systemd-logind[1439]: New session 6 of user core. Jan 13 21:25:11.105118 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:25:11.296200 sshd[1679]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:11.301397 systemd[1]: sshd@5-10.128.0.101:22-147.75.109.163:42274.service: Deactivated successfully. Jan 13 21:25:11.304214 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:25:11.306386 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:25:11.308033 systemd-logind[1439]: Removed session 6. Jan 13 21:25:11.358223 systemd[1]: Started sshd@6-10.128.0.101:22-147.75.109.163:42284.service - OpenSSH per-connection server daemon (147.75.109.163:42284). Jan 13 21:25:11.643789 sshd[1693]: Accepted publickey for core from 147.75.109.163 port 42284 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:11.645924 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:11.652771 systemd-logind[1439]: New session 7 of user core. Jan 13 21:25:11.660041 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:25:11.843540 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:25:11.844144 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:11.860241 sudo[1696]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:11.904556 sshd[1693]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:11.910792 systemd[1]: sshd@6-10.128.0.101:22-147.75.109.163:42284.service: Deactivated successfully. Jan 13 21:25:11.913911 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:25:11.915983 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:25:11.917618 systemd-logind[1439]: Removed session 7. Jan 13 21:25:11.965200 systemd[1]: Started sshd@7-10.128.0.101:22-147.75.109.163:42292.service - OpenSSH per-connection server daemon (147.75.109.163:42292). Jan 13 21:25:12.262254 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 42292 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:12.264625 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:12.271397 systemd-logind[1439]: New session 8 of user core. Jan 13 21:25:12.283035 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 21:25:12.444291 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:25:12.444861 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:12.451198 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:12.467655 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:25:12.468212 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:12.486291 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:12.501720 auditctl[1708]: No rules Jan 13 21:25:12.502428 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:25:12.502785 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:12.510488 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:12.562421 augenrules[1726]: No rules Jan 13 21:25:12.564627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:12.567369 sudo[1704]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:12.611078 sshd[1701]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:12.617479 systemd[1]: sshd@7-10.128.0.101:22-147.75.109.163:42292.service: Deactivated successfully. Jan 13 21:25:12.619894 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:25:12.620882 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:25:12.622475 systemd-logind[1439]: Removed session 8. Jan 13 21:25:12.667358 systemd[1]: Started sshd@8-10.128.0.101:22-147.75.109.163:42298.service - OpenSSH per-connection server daemon (147.75.109.163:42298). Jan 13 21:25:12.953780 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 42298 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:25:12.955847 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:12.962489 systemd-logind[1439]: New session 9 of user core. Jan 13 21:25:12.972954 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:25:13.135038 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:25:13.135542 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:13.586091 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:25:13.589203 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:25:14.042261 dockerd[1753]: time="2025-01-13T21:25:14.042087518Z" level=info msg="Starting up" Jan 13 21:25:14.220218 dockerd[1753]: time="2025-01-13T21:25:14.220106821Z" level=info msg="Loading containers: start." Jan 13 21:25:14.380847 kernel: Initializing XFRM netlink socket Jan 13 21:25:14.494667 systemd-networkd[1374]: docker0: Link UP Jan 13 21:25:14.517140 dockerd[1753]: time="2025-01-13T21:25:14.517075041Z" level=info msg="Loading containers: done." 
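The sudo and auditctl sequence above (delete the shipped rule files, restart audit-rules) boils down to flushing the kernel's audit rule list and re-reading /etc/audit/rules.d. The same steps with the stock audit tools:

    auditctl -D          # delete all loaded rules; "No rules" is the confirmation
    augenrules --load    # recompile rules.d/*.rules and load the result
    auditctl -l          # list whatever is active now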
Jan 13 21:25:14.537796 dockerd[1753]: time="2025-01-13T21:25:14.537727189Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:25:14.538012 dockerd[1753]: time="2025-01-13T21:25:14.537870594Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:25:14.538086 dockerd[1753]: time="2025-01-13T21:25:14.538011863Z" level=info msg="Daemon has completed initialization" Jan 13 21:25:14.578770 dockerd[1753]: time="2025-01-13T21:25:14.578613270Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:25:14.579223 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:25:15.655021 containerd[1459]: time="2025-01-13T21:25:15.654958083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:25:16.332458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431818712.mount: Deactivated successfully. Jan 13 21:25:19.047507 containerd[1459]: time="2025-01-13T21:25:19.047389061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:19.049531 containerd[1459]: time="2025-01-13T21:25:19.049453588Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675648" Jan 13 21:25:19.051621 containerd[1459]: time="2025-01-13T21:25:19.051518281Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:19.057853 containerd[1459]: time="2025-01-13T21:25:19.057748863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:19.059732 containerd[1459]: time="2025-01-13T21:25:19.059439716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.404410302s" Jan 13 21:25:19.059732 containerd[1459]: time="2025-01-13T21:25:19.059508714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:25:19.098633 containerd[1459]: time="2025-01-13T21:25:19.098564421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:25:21.108370 containerd[1459]: time="2025-01-13T21:25:21.107870929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:21.108055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
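Docker comes up on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. A hedged sketch for checking both sides of that warning; reading /proc/config.gz assumes the kernel was built with CONFIG_IKCONFIG_PROC, which may not hold on every image:

  # Which storage driver did the daemon pick?
  docker info --format '{{.Driver}}'

  # The kernel option the warning refers to, if /proc/config.gz exists.
  zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR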
Jan 13 21:25:21.111771 containerd[1459]: time="2025-01-13T21:25:21.111554205Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606415" Jan 13 21:25:21.113246 containerd[1459]: time="2025-01-13T21:25:21.113178650Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:21.118786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:21.121040 containerd[1459]: time="2025-01-13T21:25:21.120972695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:21.122987 containerd[1459]: time="2025-01-13T21:25:21.122741844Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.024106804s" Jan 13 21:25:21.122987 containerd[1459]: time="2025-01-13T21:25:21.122804957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:25:21.176893 containerd[1459]: time="2025-01-13T21:25:21.176587856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:25:21.336395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:21.345244 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:21.407522 kubelet[1972]: E0113 21:25:21.407358 1972 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:21.411178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:21.411436 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
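"Scheduled restart job, restart counter is at 2" interleaved with the image pulls is ordinary Restart= behavior: systemd keeps relaunching the kubelet while its config file is missing. A sketch for querying the counter and the policy; the property names are standard systemd, and the exact values in Flatcar's unit are not shown here:

  # How many times has systemd restarted the unit so far?
  systemctl show kubelet.service -p NRestarts

  # The configured restart policy and delay.
  systemctl show kubelet.service -p Restart -p RestartUSec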
Jan 13 21:25:22.860384 containerd[1459]: time="2025-01-13T21:25:22.860307416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:22.862229 containerd[1459]: time="2025-01-13T21:25:22.862173591Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783041" Jan 13 21:25:22.863554 containerd[1459]: time="2025-01-13T21:25:22.863518640Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:22.867860 containerd[1459]: time="2025-01-13T21:25:22.867778796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:22.870292 containerd[1459]: time="2025-01-13T21:25:22.869567477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.692922838s" Jan 13 21:25:22.870292 containerd[1459]: time="2025-01-13T21:25:22.869615720Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:25:22.901684 containerd[1459]: time="2025-01-13T21:25:22.901074750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:25:24.247982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284127549.mount: Deactivated successfully. 
Jan 13 21:25:24.814799 containerd[1459]: time="2025-01-13T21:25:24.814731435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:24.816125 containerd[1459]: time="2025-01-13T21:25:24.816059816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057476" Jan 13 21:25:24.817823 containerd[1459]: time="2025-01-13T21:25:24.817755362Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:24.820955 containerd[1459]: time="2025-01-13T21:25:24.820880158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:24.822154 containerd[1459]: time="2025-01-13T21:25:24.821947257Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.920029659s" Jan 13 21:25:24.822154 containerd[1459]: time="2025-01-13T21:25:24.821999008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:25:24.851958 containerd[1459]: time="2025-01-13T21:25:24.851873976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:25:25.361251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942204883.mount: Deactivated successfully. 
Jan 13 21:25:26.473279 containerd[1459]: time="2025-01-13T21:25:26.473184340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:26.475372 containerd[1459]: time="2025-01-13T21:25:26.475047431Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185767" Jan 13 21:25:26.476814 containerd[1459]: time="2025-01-13T21:25:26.476726868Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:26.481004 containerd[1459]: time="2025-01-13T21:25:26.480919467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:26.482925 containerd[1459]: time="2025-01-13T21:25:26.482561408Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.630630553s" Jan 13 21:25:26.482925 containerd[1459]: time="2025-01-13T21:25:26.482636259Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:25:26.521262 containerd[1459]: time="2025-01-13T21:25:26.521206360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:25:26.981176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229887349.mount: Deactivated successfully. 
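The pulls above (kube-apiserver through coredns) are performed by containerd on the kubelet's behalf; the transient tmpmounts units are the snapshotter unpacking layers. The same pulls can be reproduced by hand over the CRI socket, a minimal sketch assuming containerd's conventional socket path, which this log does not print:

  # Pull and list images through the CRI, as the kubelet does.
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/coredns/coredns:v1.11.1
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images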
Jan 13 21:25:26.996129 containerd[1459]: time="2025-01-13T21:25:26.996033562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:26.998766 containerd[1459]: time="2025-01-13T21:25:26.998570928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322296" Jan 13 21:25:27.000321 containerd[1459]: time="2025-01-13T21:25:27.000193559Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:27.005612 containerd[1459]: time="2025-01-13T21:25:27.005498274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:27.007132 containerd[1459]: time="2025-01-13T21:25:27.006805701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 485.532047ms" Jan 13 21:25:27.007132 containerd[1459]: time="2025-01-13T21:25:27.006862376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:25:27.042151 containerd[1459]: time="2025-01-13T21:25:27.042091749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:25:27.068205 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:25:27.565425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435040953.mount: Deactivated successfully. 
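pause:3.9, fetched in under half a second above, is the pod sandbox image; which tag containerd uses comes from the sandbox_image key in its CRI plugin configuration. A sketch for inspecting that setting (the config path is the stock default, and the file may be absent when the built-in defaults are in use):

  # Where the sandbox image is configured, if a config file exists.
  grep sandbox_image /etc/containerd/config.toml

  # Otherwise, print the built-in defaults.
  containerd config default | grep sandbox_image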
Jan 13 21:25:30.752670 containerd[1459]: time="2025-01-13T21:25:30.752574274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.754674 containerd[1459]: time="2025-01-13T21:25:30.754573285Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238577" Jan 13 21:25:30.756784 containerd[1459]: time="2025-01-13T21:25:30.756658043Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.764259 containerd[1459]: time="2025-01-13T21:25:30.764131068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:30.766630 containerd[1459]: time="2025-01-13T21:25:30.765949834Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.723791983s" Jan 13 21:25:30.766630 containerd[1459]: time="2025-01-13T21:25:30.766017292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:25:31.494738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:25:31.503171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:31.806015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:31.823580 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:31.898737 kubelet[2176]: E0113 21:25:31.898328 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:31.902505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:31.903597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:34.219352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:34.233181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:34.267142 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-9.scope)... Jan 13 21:25:34.267166 systemd[1]: Reloading... Jan 13 21:25:34.436781 zram_generator::config[2237]: No configuration found. Jan 13 21:25:34.583713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:34.685637 systemd[1]: Reloading finished in 417 ms. Jan 13 21:25:34.753160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:25:34.753409 systemd[1]: kubelet.service: Failed with result 'signal'. 
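The daemon-reload pass flags docker.socket for a ListenStream= path under the legacy /var/run directory, and the reload also terminates the still-failing kubelet with SIGTERM. One way to silence the socket warning is a drop-in that re-points the listener; this is a sketch only, and whether it is appropriate on Flatcar's image (or already fixed upstream) is not shown in this log:

  # Creates /etc/systemd/system/docker.socket.d/10-run-path.conf
  mkdir -p /etc/systemd/system/docker.socket.d
  printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' \
      > /etc/systemd/system/docker.socket.d/10-run-path.conf
  systemctl daemon-reload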
Jan 13 21:25:34.753897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:34.759071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:35.034892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:35.046277 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:35.105326 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:35.105326 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:35.105326 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:35.107754 kubelet[2282]: I0113 21:25:35.107650 2282 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:36.428995 kubelet[2282]: I0113 21:25:36.428920 2282 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:25:36.428995 kubelet[2282]: I0113 21:25:36.428969 2282 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:36.429749 kubelet[2282]: I0113 21:25:36.429383 2282 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:25:36.456666 kubelet[2282]: I0113 21:25:36.456293 2282 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:36.460250 kubelet[2282]: E0113 21:25:36.460075 2282 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.487776 kubelet[2282]: I0113 21:25:36.487670 2282 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:25:36.488457 kubelet[2282]: I0113 21:25:36.488197 2282 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:36.488609 kubelet[2282]: I0113 21:25:36.488252 2282 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:25:36.488609 kubelet[2282]: I0113 21:25:36.488605 2282 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:36.488904 kubelet[2282]: I0113 21:25:36.488626 2282 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:25:36.488904 kubelet[2282]: I0113 21:25:36.488895 2282 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:36.490757 kubelet[2282]: I0113 21:25:36.490707 2282 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:25:36.490757 kubelet[2282]: I0113 21:25:36.490752 2282 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:36.490985 kubelet[2282]: I0113 21:25:36.490801 2282 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:25:36.490985 kubelet[2282]: I0113 21:25:36.490836 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:36.499644 kubelet[2282]: W0113 21:25:36.499032 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.499644 kubelet[2282]: E0113 21:25:36.499300 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.499644 kubelet[2282]: W0113 21:25:36.499480 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.499644 kubelet[2282]: E0113 21:25:36.499556 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.500088 kubelet[2282]: I0113 21:25:36.499993 2282 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:36.504497 kubelet[2282]: I0113 21:25:36.502550 2282 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:36.504497 kubelet[2282]: W0113 21:25:36.502685 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:25:36.504497 kubelet[2282]: I0113 21:25:36.504241 2282 server.go:1264] "Started kubelet" Jan 13 21:25:36.510203 kubelet[2282]: I0113 21:25:36.509165 2282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:36.510963 kubelet[2282]: I0113 21:25:36.510914 2282 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:25:36.512729 kubelet[2282]: I0113 21:25:36.512412 2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:36.513201 kubelet[2282]: I0113 21:25:36.513132 2282 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:36.514082 kubelet[2282]: E0113 21:25:36.513429 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal.181a5d9d4df97de5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,UID:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:25:36.504192485 +0000 UTC m=+1.452368463,LastTimestamp:2025-01-13 21:25:36.504192485 +0000 UTC m=+1.452368463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,}" Jan 13 21:25:36.515562 kubelet[2282]: I0113 21:25:36.515510 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:36.522043 kubelet[2282]: E0113 21:25:36.520665 2282 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" not found" Jan 13 21:25:36.522043 kubelet[2282]: I0113 21:25:36.520760 2282 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:25:36.522043 kubelet[2282]: I0113 21:25:36.520995 2282 desired_state_of_world_populator.go:149] 
"Desired state populator starts to run" Jan 13 21:25:36.522043 kubelet[2282]: I0113 21:25:36.521094 2282 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:36.522043 kubelet[2282]: W0113 21:25:36.521762 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.522043 kubelet[2282]: E0113 21:25:36.521849 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.523100 kubelet[2282]: E0113 21:25:36.522751 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="200ms" Jan 13 21:25:36.525120 kubelet[2282]: E0113 21:25:36.524288 2282 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:36.526308 kubelet[2282]: I0113 21:25:36.526283 2282 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:36.526596 kubelet[2282]: I0113 21:25:36.526550 2282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:36.529730 kubelet[2282]: I0113 21:25:36.528271 2282 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:36.562374 kubelet[2282]: I0113 21:25:36.562052 2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:36.567490 kubelet[2282]: I0113 21:25:36.567435 2282 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:25:36.568367 kubelet[2282]: I0113 21:25:36.567752 2282 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:36.568367 kubelet[2282]: I0113 21:25:36.567905 2282 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:25:36.568367 kubelet[2282]: E0113 21:25:36.568144 2282 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:36.583231 kubelet[2282]: W0113 21:25:36.583159 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.583231 kubelet[2282]: E0113 21:25:36.583225 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:36.593598 kubelet[2282]: I0113 21:25:36.593465 2282 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:36.593598 kubelet[2282]: I0113 21:25:36.593498 2282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:36.593598 kubelet[2282]: I0113 21:25:36.593535 2282 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:36.631509 kubelet[2282]: I0113 21:25:36.631445 2282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:36.665044 kubelet[2282]: E0113 21:25:36.632040 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:36.668434 kubelet[2282]: E0113 21:25:36.668360 2282 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:25:36.684529 kubelet[2282]: I0113 21:25:36.684156 2282 policy_none.go:49] "None policy: Start" Jan 13 21:25:36.687446 kubelet[2282]: I0113 21:25:36.687356 2282 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:36.687446 kubelet[2282]: I0113 21:25:36.687426 2282 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:36.723560 kubelet[2282]: E0113 21:25:36.723457 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="400ms" Jan 13 21:25:36.838570 kubelet[2282]: I0113 21:25:36.838499 2282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:36.869855 kubelet[2282]: E0113 21:25:36.839114 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:36.871228 kubelet[2282]: E0113 21:25:36.869961 2282 kubelet.go:2361] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Jan 13 21:25:36.925174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:25:36.936220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:25:36.941222 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:25:36.950311 kubelet[2282]: I0113 21:25:36.950011 2282 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:36.950508 kubelet[2282]: I0113 21:25:36.950395 2282 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:36.950617 kubelet[2282]: I0113 21:25:36.950597 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:36.953892 kubelet[2282]: E0113 21:25:36.953628 2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" not found" Jan 13 21:25:37.125242 kubelet[2282]: E0113 21:25:37.125151 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="800ms" Jan 13 21:25:37.249275 kubelet[2282]: I0113 21:25:37.249235 2282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.249758 kubelet[2282]: E0113 21:25:37.249711 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.271153 kubelet[2282]: I0113 21:25:37.271065 2282 topology_manager.go:215] "Topology Admit Handler" podUID="4ca76b05aa2fee9f9b77f7c193ee6656" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.275855 kubelet[2282]: I0113 21:25:37.275647 2282 topology_manager.go:215] "Topology Admit Handler" podUID="5c313c970a262743066dd57bcd61ede6" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.281804 kubelet[2282]: I0113 21:25:37.281516 2282 topology_manager.go:215] "Topology Admit Handler" podUID="4d2b7920b7ed7b8204684dea90ec770d" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.290118 systemd[1]: Created slice kubepods-burstable-pod4ca76b05aa2fee9f9b77f7c193ee6656.slice - libcontainer container kubepods-burstable-pod4ca76b05aa2fee9f9b77f7c193ee6656.slice. Jan 13 21:25:37.307203 systemd[1]: Created slice kubepods-burstable-pod5c313c970a262743066dd57bcd61ede6.slice - libcontainer container kubepods-burstable-pod5c313c970a262743066dd57bcd61ede6.slice. Jan 13 21:25:37.324571 systemd[1]: Created slice kubepods-burstable-pod4d2b7920b7ed7b8204684dea90ec770d.slice - libcontainer container kubepods-burstable-pod4d2b7920b7ed7b8204684dea90ec770d.slice. 
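The kubepods-burstable-pod*.slice units created above are the cgroup slices for the three control-plane static pods (controller-manager, scheduler, apiserver) that the kubelet admits from its manifest directory. A sketch for correlating manifests and slices, assuming the kubeadm default paths:

  # The manifest directory the kubelet watches for static pods.
  grep staticPodPath /var/lib/kubelet/config.yaml   # typically /etc/kubernetes/manifests
  ls /etc/kubernetes/manifests

  # One slice per static pod UID.
  systemctl list-units 'kubepods-burstable-pod*.slice' --no-legend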
Jan 13 21:25:37.326178 kubelet[2282]: I0113 21:25:37.325971 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326178 kubelet[2282]: I0113 21:25:37.326024 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326178 kubelet[2282]: I0113 21:25:37.326058 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c313c970a262743066dd57bcd61ede6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"5c313c970a262743066dd57bcd61ede6\") " pod="kube-system/kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326178 kubelet[2282]: I0113 21:25:37.326088 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326553 kubelet[2282]: I0113 21:25:37.326120 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326553 kubelet[2282]: I0113 21:25:37.326152 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326553 kubelet[2282]: I0113 21:25:37.326186 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326553 kubelet[2282]: I0113 21:25:37.326260 2282 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.326788 kubelet[2282]: I0113 21:25:37.326297 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:37.605030 containerd[1459]: time="2025-01-13T21:25:37.604866741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:4ca76b05aa2fee9f9b77f7c193ee6656,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:37.611899 containerd[1459]: time="2025-01-13T21:25:37.611846360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:5c313c970a262743066dd57bcd61ede6,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:37.629002 containerd[1459]: time="2025-01-13T21:25:37.628943678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:4d2b7920b7ed7b8204684dea90ec770d,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:37.808236 kubelet[2282]: W0113 21:25:37.808148 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:37.808236 kubelet[2282]: E0113 21:25:37.808240 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:37.926775 kubelet[2282]: E0113 21:25:37.926563 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="1.6s" Jan 13 21:25:37.997998 kubelet[2282]: W0113 21:25:37.997888 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:37.997998 kubelet[2282]: E0113 21:25:37.998000 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.022644 
kubelet[2282]: W0113 21:25:38.022529 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.022644 kubelet[2282]: E0113 21:25:38.022644 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.054357 kubelet[2282]: I0113 21:25:38.054311 2282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:38.054854 kubelet[2282]: E0113 21:25:38.054808 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:38.087605 kubelet[2282]: W0113 21:25:38.087504 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.087605 kubelet[2282]: E0113 21:25:38.087573 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.092048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855495180.mount: Deactivated successfully. 
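Every reflector and node-registration attempt in this stretch fails with "connect: connection refused" against 10.128.0.101:6443 simply because the kube-apiserver static pod is not listening yet; the loop resolves once its container starts below. A sketch for watching that from the node (the address is taken from the log; -k skips TLS verification while the cluster CA is not yet trusted):

  # Is anything listening on the apiserver port yet?
  ss -tlnp | grep 6443 || echo "apiserver not up yet"

  # Probe the health endpoint once it binds.
  curl -ks https://10.128.0.101:6443/healthz; echo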
Jan 13 21:25:38.103136 containerd[1459]: time="2025-01-13T21:25:38.103075222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:38.104435 containerd[1459]: time="2025-01-13T21:25:38.104370039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Jan 13 21:25:38.105796 containerd[1459]: time="2025-01-13T21:25:38.105749355Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:38.107154 containerd[1459]: time="2025-01-13T21:25:38.107113497Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:38.108340 containerd[1459]: time="2025-01-13T21:25:38.108282051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:38.110828 containerd[1459]: time="2025-01-13T21:25:38.110777928Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:38.113353 containerd[1459]: time="2025-01-13T21:25:38.111862815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:38.117941 containerd[1459]: time="2025-01-13T21:25:38.117827479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:38.120184 containerd[1459]: time="2025-01-13T21:25:38.119448841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.500443ms" Jan 13 21:25:38.121095 containerd[1459]: time="2025-01-13T21:25:38.121052660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.016674ms" Jan 13 21:25:38.123027 containerd[1459]: time="2025-01-13T21:25:38.122978396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.998893ms" Jan 13 21:25:38.320514 containerd[1459]: time="2025-01-13T21:25:38.320032875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:38.320514 containerd[1459]: time="2025-01-13T21:25:38.320105245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:38.320514 containerd[1459]: time="2025-01-13T21:25:38.320146599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.320514 containerd[1459]: time="2025-01-13T21:25:38.320283766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.323585 containerd[1459]: time="2025-01-13T21:25:38.323368490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:38.323585 containerd[1459]: time="2025-01-13T21:25:38.323433298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:38.323975 containerd[1459]: time="2025-01-13T21:25:38.323460707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.324810 containerd[1459]: time="2025-01-13T21:25:38.324493257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.333001 containerd[1459]: time="2025-01-13T21:25:38.331968525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:38.333001 containerd[1459]: time="2025-01-13T21:25:38.332062326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:38.333001 containerd[1459]: time="2025-01-13T21:25:38.332091351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.333001 containerd[1459]: time="2025-01-13T21:25:38.332238768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:38.371940 systemd[1]: Started cri-containerd-d7e433e998845e353e38d10ce3f1d02f70132159eeaab15d7fcd0e96465ccea8.scope - libcontainer container d7e433e998845e353e38d10ce3f1d02f70132159eeaab15d7fcd0e96465ccea8. Jan 13 21:25:38.391390 systemd[1]: Started cri-containerd-146b10d7b61ac9203a770f0ef0c04aae352454a237bedcdbaf556dc67b43afc5.scope - libcontainer container 146b10d7b61ac9203a770f0ef0c04aae352454a237bedcdbaf556dc67b43afc5. Jan 13 21:25:38.394095 systemd[1]: Started cri-containerd-9001ce42b11b9bed6544cbe8e722878f35d696095aaae8a8cea403b48a279de9.scope - libcontainer container 9001ce42b11b9bed6544cbe8e722878f35d696095aaae8a8cea403b48a279de9. 
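The three cri-containerd-*.scope units started above are the runc shims for the control-plane pod sandboxes created by the RunPodSandbox calls. Once they exist, the sandboxes and their containers can be listed over the CRI, a sketch using the same crictl invocation style as earlier:

  # Pod sandboxes and containers as containerd reports them.
  crictl pods --namespace kube-system
  crictl ps -a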
Jan 13 21:25:38.484872 containerd[1459]: time="2025-01-13T21:25:38.484817613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:4ca76b05aa2fee9f9b77f7c193ee6656,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e433e998845e353e38d10ce3f1d02f70132159eeaab15d7fcd0e96465ccea8\"" Jan 13 21:25:38.489887 kubelet[2282]: E0113 21:25:38.489503 2282 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flat" Jan 13 21:25:38.496134 containerd[1459]: time="2025-01-13T21:25:38.495907541Z" level=info msg="CreateContainer within sandbox \"d7e433e998845e353e38d10ce3f1d02f70132159eeaab15d7fcd0e96465ccea8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:25:38.512950 containerd[1459]: time="2025-01-13T21:25:38.512783387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:4d2b7920b7ed7b8204684dea90ec770d,Namespace:kube-system,Attempt:0,} returns sandbox id \"146b10d7b61ac9203a770f0ef0c04aae352454a237bedcdbaf556dc67b43afc5\"" Jan 13 21:25:38.516986 kubelet[2282]: E0113 21:25:38.516403 2282 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-21291" Jan 13 21:25:38.518997 containerd[1459]: time="2025-01-13T21:25:38.518951637Z" level=info msg="CreateContainer within sandbox \"146b10d7b61ac9203a770f0ef0c04aae352454a237bedcdbaf556dc67b43afc5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:25:38.524999 containerd[1459]: time="2025-01-13T21:25:38.524881484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,Uid:5c313c970a262743066dd57bcd61ede6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9001ce42b11b9bed6544cbe8e722878f35d696095aaae8a8cea403b48a279de9\"" Jan 13 21:25:38.527462 kubelet[2282]: E0113 21:25:38.527426 2282 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-21291" Jan 13 21:25:38.528837 containerd[1459]: time="2025-01-13T21:25:38.528800900Z" level=info msg="CreateContainer within sandbox \"d7e433e998845e353e38d10ce3f1d02f70132159eeaab15d7fcd0e96465ccea8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62a990971ca5e6f4b26d6a97d8da7f1dafea4d9144977ac088a0a7d9d382a798\"" Jan 13 21:25:38.529978 containerd[1459]: time="2025-01-13T21:25:38.529812287Z" level=info msg="StartContainer for \"62a990971ca5e6f4b26d6a97d8da7f1dafea4d9144977ac088a0a7d9d382a798\"" Jan 13 21:25:38.529978 containerd[1459]: time="2025-01-13T21:25:38.529837275Z" level=info msg="CreateContainer within sandbox \"9001ce42b11b9bed6544cbe8e722878f35d696095aaae8a8cea403b48a279de9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:25:38.550512 containerd[1459]: time="2025-01-13T21:25:38.550341189Z" level=info msg="CreateContainer within sandbox 
\"146b10d7b61ac9203a770f0ef0c04aae352454a237bedcdbaf556dc67b43afc5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ced176f0ed3e0f4f8cc2bd939056cee83212e2469591bd0d5fb5b7dd43dc50b9\"" Jan 13 21:25:38.551432 containerd[1459]: time="2025-01-13T21:25:38.551275548Z" level=info msg="StartContainer for \"ced176f0ed3e0f4f8cc2bd939056cee83212e2469591bd0d5fb5b7dd43dc50b9\"" Jan 13 21:25:38.553140 containerd[1459]: time="2025-01-13T21:25:38.553097384Z" level=info msg="CreateContainer within sandbox \"9001ce42b11b9bed6544cbe8e722878f35d696095aaae8a8cea403b48a279de9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71aba5bcd18e60588e3f196022a580736be85c6723cb7a7d4d7d271fc92d7537\"" Jan 13 21:25:38.554396 containerd[1459]: time="2025-01-13T21:25:38.554193670Z" level=info msg="StartContainer for \"71aba5bcd18e60588e3f196022a580736be85c6723cb7a7d4d7d271fc92d7537\"" Jan 13 21:25:38.558643 kubelet[2282]: E0113 21:25:38.558509 2282 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.101:6443: connect: connection refused Jan 13 21:25:38.581940 systemd[1]: Started cri-containerd-62a990971ca5e6f4b26d6a97d8da7f1dafea4d9144977ac088a0a7d9d382a798.scope - libcontainer container 62a990971ca5e6f4b26d6a97d8da7f1dafea4d9144977ac088a0a7d9d382a798. Jan 13 21:25:38.641975 systemd[1]: Started cri-containerd-71aba5bcd18e60588e3f196022a580736be85c6723cb7a7d4d7d271fc92d7537.scope - libcontainer container 71aba5bcd18e60588e3f196022a580736be85c6723cb7a7d4d7d271fc92d7537. Jan 13 21:25:38.655266 systemd[1]: Started cri-containerd-ced176f0ed3e0f4f8cc2bd939056cee83212e2469591bd0d5fb5b7dd43dc50b9.scope - libcontainer container ced176f0ed3e0f4f8cc2bd939056cee83212e2469591bd0d5fb5b7dd43dc50b9. Jan 13 21:25:38.695721 containerd[1459]: time="2025-01-13T21:25:38.695579339Z" level=info msg="StartContainer for \"62a990971ca5e6f4b26d6a97d8da7f1dafea4d9144977ac088a0a7d9d382a798\" returns successfully" Jan 13 21:25:38.757767 containerd[1459]: time="2025-01-13T21:25:38.757464521Z" level=info msg="StartContainer for \"71aba5bcd18e60588e3f196022a580736be85c6723cb7a7d4d7d271fc92d7537\" returns successfully" Jan 13 21:25:38.759919 containerd[1459]: time="2025-01-13T21:25:38.759860978Z" level=info msg="StartContainer for \"ced176f0ed3e0f4f8cc2bd939056cee83212e2469591bd0d5fb5b7dd43dc50b9\" returns successfully" Jan 13 21:25:39.661534 kubelet[2282]: I0113 21:25:39.661479 2282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:41.311739 update_engine[1445]: I20250113 21:25:41.310765 1445 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:25:41.463782 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2565) Jan 13 21:25:41.693745 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2568) Jan 13 21:25:41.885749 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2568) Jan 13 21:25:43.503490 kubelet[2282]: I0113 21:25:43.502234 2282 apiserver.go:52] "Watching apiserver" Jan 13 21:25:43.579732 kubelet[2282]: E0113 21:25:43.579653 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:43.621528 kubelet[2282]: I0113 21:25:43.621419 2282 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:25:43.642152 kubelet[2282]: E0113 21:25:43.641429 2282 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal.181a5d9d4df97de5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,UID:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:25:36.504192485 +0000 UTC m=+1.452368463,LastTimestamp:2025-01-13 21:25:36.504192485 +0000 UTC m=+1.452368463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,}" Jan 13 21:25:43.691796 kubelet[2282]: I0113 21:25:43.689995 2282 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:43.704334 kubelet[2282]: E0113 21:25:43.704119 2282 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal.181a5d9d4f2bcb6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,UID:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:25:36.524266351 +0000 UTC m=+1.472442332,LastTimestamp:2025-01-13 21:25:36.524266351 +0000 UTC m=+1.472442332,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal,}" Jan 13 21:25:44.578901 kubelet[2282]: W0113 21:25:44.578846 2282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:25:45.512438 systemd[1]: Reloading requested from client PID 
2578 ('systemctl') (unit session-9.scope)... Jan 13 21:25:45.512461 systemd[1]: Reloading... Jan 13 21:25:45.653753 zram_generator::config[2618]: No configuration found. Jan 13 21:25:45.768432 kubelet[2282]: W0113 21:25:45.768139 2282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:25:45.796454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:45.925180 systemd[1]: Reloading finished in 411 ms. Jan 13 21:25:45.977964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:45.983816 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:25:45.984120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:45.984188 systemd[1]: kubelet.service: Consumed 2.118s CPU time, 117.7M memory peak, 0B memory swap peak. Jan 13 21:25:45.991109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:46.263010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:46.278511 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:46.373452 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:46.374342 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:46.374342 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:46.374342 kubelet[2666]: I0113 21:25:46.374168 2666 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:46.384641 kubelet[2666]: I0113 21:25:46.384603 2666 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:25:46.384641 kubelet[2666]: I0113 21:25:46.384634 2666 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:46.384964 kubelet[2666]: I0113 21:25:46.384941 2666 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:25:46.388135 kubelet[2666]: I0113 21:25:46.387805 2666 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:25:46.390816 kubelet[2666]: I0113 21:25:46.390486 2666 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:46.403288 kubelet[2666]: I0113 21:25:46.403235 2666 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:25:46.403816 kubelet[2666]: I0113 21:25:46.403744 2666 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:46.404133 kubelet[2666]: I0113 21:25:46.403801 2666 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:25:46.404338 kubelet[2666]: I0113 21:25:46.404148 2666 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:46.404338 kubelet[2666]: I0113 21:25:46.404181 2666 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:25:46.404338 kubelet[2666]: I0113 21:25:46.404314 2666 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:46.404499 kubelet[2666]: I0113 21:25:46.404490 2666 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:25:46.404551 kubelet[2666]: I0113 21:25:46.404513 2666 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:46.404602 kubelet[2666]: I0113 21:25:46.404555 2666 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:25:46.404602 kubelet[2666]: I0113 21:25:46.404582 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:46.410763 kubelet[2666]: I0113 21:25:46.409382 2666 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:46.410763 kubelet[2666]: I0113 21:25:46.409756 2666 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:46.410763 kubelet[2666]: I0113 21:25:46.410498 2666 server.go:1264] "Started kubelet" Jan 13 21:25:46.418078 kubelet[2666]: I0113 21:25:46.418044 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:46.429345 kubelet[2666]: I0113 21:25:46.429307 2666 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:25:46.432604 kubelet[2666]: I0113 21:25:46.431930 2666 
server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:46.438814 kubelet[2666]: I0113 21:25:46.438293 2666 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:25:46.447672 kubelet[2666]: I0113 21:25:46.439311 2666 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:46.448320 kubelet[2666]: I0113 21:25:46.448296 2666 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:46.455673 kubelet[2666]: I0113 21:25:46.455615 2666 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:25:46.456598 kubelet[2666]: I0113 21:25:46.456254 2666 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:46.464756 kubelet[2666]: I0113 21:25:46.463637 2666 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:46.466225 kubelet[2666]: I0113 21:25:46.466179 2666 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:25:46.466381 kubelet[2666]: I0113 21:25:46.466238 2666 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:46.466381 kubelet[2666]: I0113 21:25:46.466274 2666 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:25:46.466381 kubelet[2666]: E0113 21:25:46.466348 2666 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:46.473732 kubelet[2666]: I0113 21:25:46.471815 2666 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:46.475815 kubelet[2666]: I0113 21:25:46.474941 2666 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:46.485450 kubelet[2666]: E0113 21:25:46.484332 2666 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:46.490380 kubelet[2666]: I0113 21:25:46.490333 2666 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:46.544013 kubelet[2666]: I0113 21:25:46.541519 2666 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.560219 kubelet[2666]: I0113 21:25:46.560167 2666 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.560446 kubelet[2666]: I0113 21:25:46.560295 2666 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.572024 kubelet[2666]: E0113 21:25:46.571981 2666 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584346 2666 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584373 2666 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584406 2666 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584736 2666 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584755 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:25:46.587552 kubelet[2666]: I0113 21:25:46.584824 2666 policy_none.go:49] "None policy: Start" Jan 13 21:25:46.592677 kubelet[2666]: I0113 21:25:46.592642 2666 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:46.592979 kubelet[2666]: I0113 21:25:46.592965 2666 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:46.593747 kubelet[2666]: I0113 21:25:46.593722 2666 state_mem.go:75] "Updated machine memory state" Jan 13 21:25:46.605820 kubelet[2666]: I0113 21:25:46.605460 2666 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:46.605820 kubelet[2666]: I0113 21:25:46.605827 2666 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:46.609113 kubelet[2666]: I0113 21:25:46.607722 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:46.773100 kubelet[2666]: I0113 21:25:46.772997 2666 topology_manager.go:215] "Topology Admit Handler" podUID="4ca76b05aa2fee9f9b77f7c193ee6656" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.773380 kubelet[2666]: I0113 21:25:46.773215 2666 topology_manager.go:215] "Topology Admit Handler" podUID="5c313c970a262743066dd57bcd61ede6" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.773380 kubelet[2666]: I0113 21:25:46.773338 2666 topology_manager.go:215] "Topology Admit Handler" podUID="4d2b7920b7ed7b8204684dea90ec770d" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.785355 kubelet[2666]: W0113 21:25:46.784071 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must be no more than 63 characters must not contain dots] Jan 13 21:25:46.785355 kubelet[2666]: W0113 21:25:46.784449 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:25:46.785355 kubelet[2666]: E0113 21:25:46.784520 2666 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.786143 kubelet[2666]: W0113 21:25:46.786076 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:25:46.786295 kubelet[2666]: E0113 21:25:46.786168 2666 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.859765 kubelet[2666]: I0113 21:25:46.859154 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.859765 kubelet[2666]: I0113 21:25:46.859247 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.859765 kubelet[2666]: I0113 21:25:46.859287 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.859765 kubelet[2666]: I0113 21:25:46.859320 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.860155 kubelet[2666]: I0113 21:25:46.859353 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d2b7920b7ed7b8204684dea90ec770d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: 
\"4d2b7920b7ed7b8204684dea90ec770d\") " pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.860155 kubelet[2666]: I0113 21:25:46.859386 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.860155 kubelet[2666]: I0113 21:25:46.859421 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.860155 kubelet[2666]: I0113 21:25:46.859733 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ca76b05aa2fee9f9b77f7c193ee6656-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"4ca76b05aa2fee9f9b77f7c193ee6656\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:46.860383 kubelet[2666]: I0113 21:25:46.859822 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c313c970a262743066dd57bcd61ede6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal\" (UID: \"5c313c970a262743066dd57bcd61ede6\") " pod="kube-system/kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:25:47.408366 kubelet[2666]: I0113 21:25:47.406075 2666 apiserver.go:52] "Watching apiserver" Jan 13 21:25:47.458865 kubelet[2666]: I0113 21:25:47.457919 2666 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:25:47.598564 kubelet[2666]: I0113 21:25:47.598469 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" podStartSLOduration=2.5984400279999997 podStartE2EDuration="2.598440028s" podCreationTimestamp="2025-01-13 21:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:47.579332502 +0000 UTC m=+1.287187625" watchObservedRunningTime="2025-01-13 21:25:47.598440028 +0000 UTC m=+1.306295151" Jan 13 21:25:47.618866 kubelet[2666]: I0113 21:25:47.618520 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" podStartSLOduration=1.6184891989999999 podStartE2EDuration="1.618489199s" podCreationTimestamp="2025-01-13 21:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:47.617340831 +0000 UTC m=+1.325195955" 
watchObservedRunningTime="2025-01-13 21:25:47.618489199 +0000 UTC m=+1.326344323" Jan 13 21:25:47.618866 kubelet[2666]: I0113 21:25:47.618689 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" podStartSLOduration=3.618673506 podStartE2EDuration="3.618673506s" podCreationTimestamp="2025-01-13 21:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:47.599667275 +0000 UTC m=+1.307522414" watchObservedRunningTime="2025-01-13 21:25:47.618673506 +0000 UTC m=+1.326528628" Jan 13 21:25:52.619925 sudo[1737]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:52.663840 sshd[1734]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:52.668865 systemd[1]: sshd@8-10.128.0.101:22-147.75.109.163:42298.service: Deactivated successfully. Jan 13 21:25:52.672154 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:25:52.672478 systemd[1]: session-9.scope: Consumed 6.727s CPU time, 199.9M memory peak, 0B memory swap peak. Jan 13 21:25:52.674420 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:25:52.676173 systemd-logind[1439]: Removed session 9. Jan 13 21:26:00.239640 kubelet[2666]: I0113 21:26:00.238977 2666 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:26:00.241869 containerd[1459]: time="2025-01-13T21:26:00.240757946Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:26:00.243047 kubelet[2666]: I0113 21:26:00.241968 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:26:01.124457 kubelet[2666]: I0113 21:26:01.124389 2666 topology_manager.go:215] "Topology Admit Handler" podUID="31f2dffb-fe2e-43ed-8c30-ed82077bfa8c" podNamespace="kube-system" podName="kube-proxy-4zkdl" Jan 13 21:26:01.144676 systemd[1]: Created slice kubepods-besteffort-pod31f2dffb_fe2e_43ed_8c30_ed82077bfa8c.slice - libcontainer container kubepods-besteffort-pod31f2dffb_fe2e_43ed_8c30_ed82077bfa8c.slice. 
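[Editor's note] The two Node events rejected earlier (Starting and InvalidDiskCapacity, refused only because the "default" namespace did not exist yet during bootstrap) carry names ending in 181a5d9d4df97de5 and 181a5d9d4f2bcb6f. That suffix is client-go's event-naming convention: "<object name>.<hex of the first timestamp's UnixNano>", which can be verified against the FirstTimestamp fields printed in those records:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// FirstTimestamp values copied from the two rejected events above.
	starting := time.Date(2025, time.January, 13, 21, 25, 36, 504192485, time.UTC)
	invalidDisk := time.Date(2025, time.January, 13, 21, 25, 36, 524266351, time.UTC)

	fmt.Printf("%x\n", starting.UnixNano())    // 181a5d9d4df97de5
	fmt.Printf("%x\n", invalidDisk.UnixNano()) // 181a5d9d4f2bcb6f
	// Both match the suffixes of the event names in the rejected records,
	// confirming the name is just the first timestamp in nanoseconds, hex-encoded.
}
```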
Jan 13 21:26:01.160312 kubelet[2666]: I0113 21:26:01.160266 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31f2dffb-fe2e-43ed-8c30-ed82077bfa8c-kube-proxy\") pod \"kube-proxy-4zkdl\" (UID: \"31f2dffb-fe2e-43ed-8c30-ed82077bfa8c\") " pod="kube-system/kube-proxy-4zkdl" Jan 13 21:26:01.160312 kubelet[2666]: I0113 21:26:01.160320 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pq5w\" (UniqueName: \"kubernetes.io/projected/31f2dffb-fe2e-43ed-8c30-ed82077bfa8c-kube-api-access-7pq5w\") pod \"kube-proxy-4zkdl\" (UID: \"31f2dffb-fe2e-43ed-8c30-ed82077bfa8c\") " pod="kube-system/kube-proxy-4zkdl" Jan 13 21:26:01.160312 kubelet[2666]: I0113 21:26:01.160357 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f2dffb-fe2e-43ed-8c30-ed82077bfa8c-xtables-lock\") pod \"kube-proxy-4zkdl\" (UID: \"31f2dffb-fe2e-43ed-8c30-ed82077bfa8c\") " pod="kube-system/kube-proxy-4zkdl" Jan 13 21:26:01.160312 kubelet[2666]: I0113 21:26:01.160386 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f2dffb-fe2e-43ed-8c30-ed82077bfa8c-lib-modules\") pod \"kube-proxy-4zkdl\" (UID: \"31f2dffb-fe2e-43ed-8c30-ed82077bfa8c\") " pod="kube-system/kube-proxy-4zkdl" Jan 13 21:26:01.258728 kubelet[2666]: I0113 21:26:01.258643 2666 topology_manager.go:215] "Topology Admit Handler" podUID="0a64bdf7-fef9-4ace-a94c-a34d2f14560a" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-fg8ss" Jan 13 21:26:01.268554 kubelet[2666]: W0113 21:26:01.267967 2666 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' and this object Jan 13 21:26:01.268554 kubelet[2666]: E0113 21:26:01.268024 2666 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' and this object Jan 13 21:26:01.272440 systemd[1]: Created slice kubepods-besteffort-pod0a64bdf7_fef9_4ace_a94c_a34d2f14560a.slice - libcontainer container kubepods-besteffort-pod0a64bdf7_fef9_4ace_a94c_a34d2f14560a.slice. 
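[Editor's note] The kube-api-access-7pq5w and kube-api-access-frm97 volumes above are the projected service-account token volumes every pod receives. Inside the container they surface at the conventional mount point, which is assumed here (the log shows only the volume names, not the mount path); a minimal sketch of a workload reading it:

```go
package main

import (
	"fmt"
	"os"
)

// Conventional in-container mount point of the projected kube-api-access
// volume: it bundles the service-account token, the cluster CA bundle,
// and the pod's namespace.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		fmt.Println("not running inside a pod:", err)
		return
	}
	fmt.Printf("token length: %d bytes\n", len(token))
}
```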
Jan 13 21:26:01.361960 kubelet[2666]: I0113 21:26:01.361880 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a64bdf7-fef9-4ace-a94c-a34d2f14560a-var-lib-calico\") pod \"tigera-operator-7bc55997bb-fg8ss\" (UID: \"0a64bdf7-fef9-4ace-a94c-a34d2f14560a\") " pod="tigera-operator/tigera-operator-7bc55997bb-fg8ss" Jan 13 21:26:01.361960 kubelet[2666]: I0113 21:26:01.361954 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm97\" (UniqueName: \"kubernetes.io/projected/0a64bdf7-fef9-4ace-a94c-a34d2f14560a-kube-api-access-frm97\") pod \"tigera-operator-7bc55997bb-fg8ss\" (UID: \"0a64bdf7-fef9-4ace-a94c-a34d2f14560a\") " pod="tigera-operator/tigera-operator-7bc55997bb-fg8ss" Jan 13 21:26:01.455048 containerd[1459]: time="2025-01-13T21:26:01.454884164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zkdl,Uid:31f2dffb-fe2e-43ed-8c30-ed82077bfa8c,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:01.496565 containerd[1459]: time="2025-01-13T21:26:01.496353470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:01.496924 containerd[1459]: time="2025-01-13T21:26:01.496575989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:01.496924 containerd[1459]: time="2025-01-13T21:26:01.496629012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:01.496924 containerd[1459]: time="2025-01-13T21:26:01.496830206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:01.531120 systemd[1]: Started cri-containerd-4a62345b08cf35774902b1aab7990297e76811a30d0ebd3270afa563f68daa22.scope - libcontainer container 4a62345b08cf35774902b1aab7990297e76811a30d0ebd3270afa563f68daa22. Jan 13 21:26:01.565393 containerd[1459]: time="2025-01-13T21:26:01.565305355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zkdl,Uid:31f2dffb-fe2e-43ed-8c30-ed82077bfa8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a62345b08cf35774902b1aab7990297e76811a30d0ebd3270afa563f68daa22\"" Jan 13 21:26:01.571028 containerd[1459]: time="2025-01-13T21:26:01.570949049Z" level=info msg="CreateContainer within sandbox \"4a62345b08cf35774902b1aab7990297e76811a30d0ebd3270afa563f68daa22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:26:01.594557 containerd[1459]: time="2025-01-13T21:26:01.594498421Z" level=info msg="CreateContainer within sandbox \"4a62345b08cf35774902b1aab7990297e76811a30d0ebd3270afa563f68daa22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8\"" Jan 13 21:26:01.595357 containerd[1459]: time="2025-01-13T21:26:01.595321224Z" level=info msg="StartContainer for \"ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8\"" Jan 13 21:26:01.632147 systemd[1]: Started cri-containerd-ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8.scope - libcontainer container ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8. 
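[Editor's note] The "Created slice kubepods-besteffort-pod31f2dffb_..." and "Started cri-containerd-ae2304....scope" lines above reflect the systemd cgroup driver (CgroupDriver "systemd" in the container-manager config earlier): systemd nests a slice at every '-' boundary, and the container's scope lands under the pod slice. A sketch of that path expansion, assuming the unified cgroup hierarchy at /sys/fs/cgroup:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// expandSlice applies systemd's nesting rule: "a-b-c.slice" lives at
// "a.slice/a-b.slice/a-b-c.slice". (Pod UIDs have their dashes rewritten
// to underscores, as in the log, precisely so they survive this rule.)
func expandSlice(slice string) string {
	name := strings.TrimSuffix(slice, ".slice")
	var parts []string
	prefix := ""
	for _, seg := range strings.Split(name, "-") {
		if prefix != "" {
			prefix += "-"
		}
		prefix += seg
		parts = append(parts, prefix+".slice")
	}
	return path.Join(parts...)
}

func main() {
	slice := "kubepods-besteffort-pod31f2dffb_fe2e_43ed_8c30_ed82077bfa8c.slice"
	scope := "cri-containerd-ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8.scope"
	fmt.Println(path.Join("/sys/fs/cgroup", expandSlice(slice), scope))
	// /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/
	//   kubepods-besteffort-pod31f2dffb_fe2e_43ed_8c30_ed82077bfa8c.slice/
	//   cri-containerd-ae2304....scope
}
```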
Jan 13 21:26:01.671385 containerd[1459]: time="2025-01-13T21:26:01.671132049Z" level=info msg="StartContainer for \"ae2304d3f7ed035c08e360fcc1734cc43a2d6294c63bdffabaf0ec03a27e94b8\" returns successfully" Jan 13 21:26:02.179078 containerd[1459]: time="2025-01-13T21:26:02.179002489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fg8ss,Uid:0a64bdf7-fef9-4ace-a94c-a34d2f14560a,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:26:02.220159 containerd[1459]: time="2025-01-13T21:26:02.219637856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:02.220159 containerd[1459]: time="2025-01-13T21:26:02.219766406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:02.220159 containerd[1459]: time="2025-01-13T21:26:02.219809812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:02.220159 containerd[1459]: time="2025-01-13T21:26:02.220008868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:02.252068 systemd[1]: Started cri-containerd-e6fd0099429f742eebea4f78f7a947b43b79e42ad9deece517dd41297e4fca01.scope - libcontainer container e6fd0099429f742eebea4f78f7a947b43b79e42ad9deece517dd41297e4fca01. Jan 13 21:26:02.302655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4910575.mount: Deactivated successfully. Jan 13 21:26:02.332303 containerd[1459]: time="2025-01-13T21:26:02.331267260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fg8ss,Uid:0a64bdf7-fef9-4ace-a94c-a34d2f14560a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e6fd0099429f742eebea4f78f7a947b43b79e42ad9deece517dd41297e4fca01\"" Jan 13 21:26:02.337523 containerd[1459]: time="2025-01-13T21:26:02.336933724Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:26:02.583344 kubelet[2666]: I0113 21:26:02.583236 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4zkdl" podStartSLOduration=1.583204234 podStartE2EDuration="1.583204234s" podCreationTimestamp="2025-01-13 21:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:02.582986573 +0000 UTC m=+16.290841696" watchObservedRunningTime="2025-01-13 21:26:02.583204234 +0000 UTC m=+16.291059357" Jan 13 21:26:07.989862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204458074.mount: Deactivated successfully. 
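[Editor's note] The RunPodSandbox → CreateContainer → StartContainer progression for kube-proxy-4zkdl above is the kubelet driving containerd over the CRI gRPC API. A sketch of the same three calls using the published CRI stubs; the socket path and image name are assumptions (containerd's conventional endpoint, and the image implied by the logged kubelet version), and the full sandbox/container configs the kubelet sends are trimmed for brevity:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Conventional containerd CRI endpoint (not shown in the log; assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox returns the sandbox id echoed in the log above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-4zkdl",
				Namespace: "kube-system",
				Uid:       "31f2dffb-fe2e-43ed-8c30-ed82077bfa8c",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox; 3. StartContainer by id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical image reference, inferred from kubeletVersion v1.30.1.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.1"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println("started:", ctr.ContainerId, err)
}
```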
Jan 13 21:26:08.738253 containerd[1459]: time="2025-01-13T21:26:08.738188861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:08.739684 containerd[1459]: time="2025-01-13T21:26:08.739611855Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764313" Jan 13 21:26:08.741365 containerd[1459]: time="2025-01-13T21:26:08.741298109Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:08.744935 containerd[1459]: time="2025-01-13T21:26:08.744883317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:08.746307 containerd[1459]: time="2025-01-13T21:26:08.746119417Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.409101414s" Jan 13 21:26:08.746307 containerd[1459]: time="2025-01-13T21:26:08.746170289Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:26:08.750139 containerd[1459]: time="2025-01-13T21:26:08.750088239Z" level=info msg="CreateContainer within sandbox \"e6fd0099429f742eebea4f78f7a947b43b79e42ad9deece517dd41297e4fca01\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:26:08.771713 containerd[1459]: time="2025-01-13T21:26:08.771651254Z" level=info msg="CreateContainer within sandbox \"e6fd0099429f742eebea4f78f7a947b43b79e42ad9deece517dd41297e4fca01\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a664a2d34ea428734ada636ec72987dbb80429656893c23ef97babe1abfdb2c\"" Jan 13 21:26:08.773108 containerd[1459]: time="2025-01-13T21:26:08.773071408Z" level=info msg="StartContainer for \"5a664a2d34ea428734ada636ec72987dbb80429656893c23ef97babe1abfdb2c\"" Jan 13 21:26:08.819989 systemd[1]: Started cri-containerd-5a664a2d34ea428734ada636ec72987dbb80429656893c23ef97babe1abfdb2c.scope - libcontainer container 5a664a2d34ea428734ada636ec72987dbb80429656893c23ef97babe1abfdb2c. 
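[Editor's note] The pull above reports 21,764,313 bytes read in 6.409101414s for quay.io/tigera/operator:v1.36.2 (image size 21,758,492 bytes). A quick check of the effective throughput from those logged figures:

```go
package main

import "fmt"

func main() {
	// Figures from the pull records above.
	const bytesRead = 21764313  // "active requests=0, bytes read=21764313"
	const seconds = 6.409101414 // "in 6.409101414s"

	rate := bytesRead / seconds
	fmt.Printf("%.2f MiB/s (%.0f bytes/s)\n", rate/(1024*1024), rate)
	// ≈ 3.24 MiB/s for this pull on this node.
}
```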
Jan 13 21:26:08.858571 containerd[1459]: time="2025-01-13T21:26:08.858519296Z" level=info msg="StartContainer for \"5a664a2d34ea428734ada636ec72987dbb80429656893c23ef97babe1abfdb2c\" returns successfully" Jan 13 21:26:12.038189 kubelet[2666]: I0113 21:26:12.038111 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-fg8ss" podStartSLOduration=4.626073761 podStartE2EDuration="11.038082502s" podCreationTimestamp="2025-01-13 21:26:01 +0000 UTC" firstStartedPulling="2025-01-13 21:26:02.336033265 +0000 UTC m=+16.043888375" lastFinishedPulling="2025-01-13 21:26:08.748042016 +0000 UTC m=+22.455897116" observedRunningTime="2025-01-13 21:26:09.599115817 +0000 UTC m=+23.306970938" watchObservedRunningTime="2025-01-13 21:26:12.038082502 +0000 UTC m=+25.745937629" Jan 13 21:26:12.038898 kubelet[2666]: I0113 21:26:12.038300 2666 topology_manager.go:215] "Topology Admit Handler" podUID="6ca4e421-62fc-4899-9bbf-17ea84ea9561" podNamespace="calico-system" podName="calico-typha-c6f6b644c-xmx58" Jan 13 21:26:12.052331 systemd[1]: Created slice kubepods-besteffort-pod6ca4e421_62fc_4899_9bbf_17ea84ea9561.slice - libcontainer container kubepods-besteffort-pod6ca4e421_62fc_4899_9bbf_17ea84ea9561.slice. Jan 13 21:26:12.142776 kubelet[2666]: I0113 21:26:12.142719 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ca4e421-62fc-4899-9bbf-17ea84ea9561-typha-certs\") pod \"calico-typha-c6f6b644c-xmx58\" (UID: \"6ca4e421-62fc-4899-9bbf-17ea84ea9561\") " pod="calico-system/calico-typha-c6f6b644c-xmx58" Jan 13 21:26:12.142776 kubelet[2666]: I0113 21:26:12.142784 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlvk\" (UniqueName: \"kubernetes.io/projected/6ca4e421-62fc-4899-9bbf-17ea84ea9561-kube-api-access-qmlvk\") pod \"calico-typha-c6f6b644c-xmx58\" (UID: \"6ca4e421-62fc-4899-9bbf-17ea84ea9561\") " pod="calico-system/calico-typha-c6f6b644c-xmx58" Jan 13 21:26:12.143077 kubelet[2666]: I0113 21:26:12.142824 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca4e421-62fc-4899-9bbf-17ea84ea9561-tigera-ca-bundle\") pod \"calico-typha-c6f6b644c-xmx58\" (UID: \"6ca4e421-62fc-4899-9bbf-17ea84ea9561\") " pod="calico-system/calico-typha-c6f6b644c-xmx58" Jan 13 21:26:12.153550 kubelet[2666]: I0113 21:26:12.153495 2666 topology_manager.go:215] "Topology Admit Handler" podUID="ab4d7150-c7c3-434c-8388-c2a756a129da" podNamespace="calico-system" podName="calico-node-7fcj5" Jan 13 21:26:12.166049 systemd[1]: Created slice kubepods-besteffort-podab4d7150_c7c3_434c_8388_c2a756a129da.slice - libcontainer container kubepods-besteffort-podab4d7150_c7c3_434c_8388_c2a756a129da.slice. 
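[Editor's note] The startup-latency record for tigera-operator above separates podStartE2EDuration (creation to observed-running, 11.038082502s) from podStartSLOduration (4.626073761s); the difference is the image-pull window bounded by firstStartedPulling and lastFinishedPulling. Checking that with the monotonic (m=+) clock offsets printed in the record:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (the "m=+..." suffixes) from the tracker record above.
	const firstStartedPulling = 16.043888375
	const lastFinishedPulling = 22.455897116
	const e2e = 11.038082502 // podStartE2EDuration in seconds

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pullWindow)      // 6.412008741s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pullWindow)  // 4.626073761s
	// Matches podStartSLOduration above: the SLO metric excludes
	// time spent pulling images.
}
```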
Jan 13 21:26:12.243376 kubelet[2666]: I0113 21:26:12.243321 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab4d7150-c7c3-434c-8388-c2a756a129da-node-certs\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243376 kubelet[2666]: I0113 21:26:12.243383 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-var-lib-calico\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243632 kubelet[2666]: I0113 21:26:12.243431 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-lib-modules\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243632 kubelet[2666]: I0113 21:26:12.243454 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d7150-c7c3-434c-8388-c2a756a129da-tigera-ca-bundle\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243632 kubelet[2666]: I0113 21:26:12.243480 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-var-run-calico\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243632 kubelet[2666]: I0113 21:26:12.243508 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-policysync\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.243632 kubelet[2666]: I0113 21:26:12.243532 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbtn\" (UniqueName: \"kubernetes.io/projected/ab4d7150-c7c3-434c-8388-c2a756a129da-kube-api-access-srbtn\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.244426 kubelet[2666]: I0113 21:26:12.243561 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-xtables-lock\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.244426 kubelet[2666]: I0113 21:26:12.243589 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-flexvol-driver-host\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.244426 kubelet[2666]: I0113 21:26:12.243618 2666 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-cni-bin-dir\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.244426 kubelet[2666]: I0113 21:26:12.243645 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-cni-net-dir\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.244426 kubelet[2666]: I0113 21:26:12.243716 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab4d7150-c7c3-434c-8388-c2a756a129da-cni-log-dir\") pod \"calico-node-7fcj5\" (UID: \"ab4d7150-c7c3-434c-8388-c2a756a129da\") " pod="calico-system/calico-node-7fcj5" Jan 13 21:26:12.325623 kubelet[2666]: I0113 21:26:12.324081 2666 topology_manager.go:215] "Topology Admit Handler" podUID="c6337092-d429-49f9-9c09-de05379de9a5" podNamespace="calico-system" podName="csi-node-driver-brgw4" Jan 13 21:26:12.325623 kubelet[2666]: E0113 21:26:12.324449 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5" Jan 13 21:26:12.344475 kubelet[2666]: I0113 21:26:12.344420 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6337092-d429-49f9-9c09-de05379de9a5-kubelet-dir\") pod \"csi-node-driver-brgw4\" (UID: \"c6337092-d429-49f9-9c09-de05379de9a5\") " pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:12.344475 kubelet[2666]: I0113 21:26:12.344481 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6337092-d429-49f9-9c09-de05379de9a5-socket-dir\") pod \"csi-node-driver-brgw4\" (UID: \"c6337092-d429-49f9-9c09-de05379de9a5\") " pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:12.344748 kubelet[2666]: I0113 21:26:12.344613 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhx9\" (UniqueName: \"kubernetes.io/projected/c6337092-d429-49f9-9c09-de05379de9a5-kube-api-access-fwhx9\") pod \"csi-node-driver-brgw4\" (UID: \"c6337092-d429-49f9-9c09-de05379de9a5\") " pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:12.344748 kubelet[2666]: I0113 21:26:12.344658 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6337092-d429-49f9-9c09-de05379de9a5-registration-dir\") pod \"csi-node-driver-brgw4\" (UID: \"c6337092-d429-49f9-9c09-de05379de9a5\") " pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:12.344748 kubelet[2666]: I0113 21:26:12.344725 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6337092-d429-49f9-9c09-de05379de9a5-varrun\") pod \"csi-node-driver-brgw4\" 
(UID: \"c6337092-d429-49f9-9c09-de05379de9a5\") " pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:12.349356 kubelet[2666]: E0113 21:26:12.349313 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:26:12.356716 kubelet[2666]: W0113 21:26:12.351743 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:26:12.356716 kubelet[2666]: E0113 21:26:12.352137 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:26:12.356716 kubelet[2666]: E0113 21:26:12.353119 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:26:12.356716 kubelet[2666]: W0113 21:26:12.353153 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:26:12.356716 kubelet[2666]: E0113 21:26:12.353242 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:26:12.359234 kubelet[2666]: E0113 21:26:12.359210 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:26:12.359506 kubelet[2666]: W0113 21:26:12.359368 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:26:12.359793 kubelet[2666]: E0113 21:26:12.359744 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:26:12.361978 containerd[1459]: time="2025-01-13T21:26:12.361144241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c6f6b644c-xmx58,Uid:6ca4e421-62fc-4899-9bbf-17ea84ea9561,Namespace:calico-system,Attempt:0,}" Jan 13 21:26:12.363008 kubelet[2666]: E0113 21:26:12.361384 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:26:12.363008 kubelet[2666]: W0113 21:26:12.362822 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:26:12.363008 kubelet[2666]: E0113 21:26:12.362962 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 21:26:12.364801 kubelet[2666]: E0113 21:26:12.364352 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:26:12.364801 kubelet[2666]: W0113 21:26:12.364377 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:26:12.364801 kubelet[2666]: E0113 21:26:12.364543 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:26:12.448127 containerd[1459]: time="2025-01-13T21:26:12.445261645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:12.448127 containerd[1459]: time="2025-01-13T21:26:12.445343211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:12.448127 containerd[1459]: time="2025-01-13T21:26:12.445369351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:12.448127 containerd[1459]: time="2025-01-13T21:26:12.445495558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:12.478782 containerd[1459]: time="2025-01-13T21:26:12.477838399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fcj5,Uid:ab4d7150-c7c3-434c-8388-c2a756a129da,Namespace:calico-system,Attempt:0,}"
Jan 13 21:26:12.502992 systemd[1]: Started cri-containerd-dd27a1b0865809c186c770b855953f59e169eb039b41acf5c38b6ef538f5bc75.scope - libcontainer container dd27a1b0865809c186c770b855953f59e169eb039b41acf5c38b6ef538f5bc75.
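The recurring kubelet triplet above (repeated many times per second between 21:26:12.364 and 21:26:12.515) comes from FlexVolume dynamic probing: kubelet execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec with the argument init and expects a JSON status on stdout. Because the nodeagent~uds/uds binary is not installed yet, the call fails, the captured output is empty, and decoding "" yields exactly the logged "unexpected end of JSON input". A minimal sketch of that decode step (the DriverStatus shape here is a simplified illustration, not kubelet's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // DriverStatus is a simplified stand-in for the FlexVolume init reply,
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type DriverStatus struct {
    	Status       string          `json:"status"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    	out, err := exec.Command(driver, "init").Output() // out stays "" while the binary is missing
    	if err != nil {
    		fmt.Println("FlexVolume: driver call failed:", err)
    	}
    	var status DriverStatus
    	if err := json.Unmarshal(out, &status); err != nil {
    		fmt.Println("Failed to unmarshal output:", err) // "unexpected end of JSON input"
    	}
    }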
Jan 13 21:26:12.514871 kubelet[2666]: E0113 21:26:12.514662 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:26:12.515839 kubelet[2666]: W0113 21:26:12.514689 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:26:12.515839 kubelet[2666]: E0113 21:26:12.515781 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:26:12.555400 containerd[1459]: time="2025-01-13T21:26:12.554883830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:12.555400 containerd[1459]: time="2025-01-13T21:26:12.554955696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:12.555400 containerd[1459]: time="2025-01-13T21:26:12.554975443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:12.555400 containerd[1459]: time="2025-01-13T21:26:12.555102749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:12.612907 systemd[1]: Started cri-containerd-c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc.scope - libcontainer container c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc.
Jan 13 21:26:12.702515 containerd[1459]: time="2025-01-13T21:26:12.701909201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c6f6b644c-xmx58,Uid:6ca4e421-62fc-4899-9bbf-17ea84ea9561,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd27a1b0865809c186c770b855953f59e169eb039b41acf5c38b6ef538f5bc75\""
Jan 13 21:26:12.706434 containerd[1459]: time="2025-01-13T21:26:12.706387383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 21:26:12.730433 containerd[1459]: time="2025-01-13T21:26:12.730333047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fcj5,Uid:ab4d7150-c7c3-434c-8388-c2a756a129da,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\""
Jan 13 21:26:13.770294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504955026.mount: Deactivated successfully.
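The RunPodSandbox / PullImage pairs above are kubelet driving containerd over the CRI gRPC API on its unix socket; the returned sandbox ids are what the later CreateContainer and StartContainer entries refer back to. A hypothetical direct client call, sketched with the published k8s.io/cri-api bindings (metadata values copied from the log; this is an illustrative sketch, not kubelet's code path):

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtime.NewRuntimeServiceClient(conn)
    	// Mirrors the "RunPodSandbox for &PodSandboxMetadata{...}" entries above.
    	resp, err := rt.RunPodSandbox(context.Background(), &runtime.RunPodSandboxRequest{
    		Config: &runtime.PodSandboxConfig{
    			Metadata: &runtime.PodSandboxMetadata{
    				Name:      "calico-typha-c6f6b644c-xmx58",
    				Uid:       "6ca4e421-62fc-4899-9bbf-17ea84ea9561",
    				Namespace: "calico-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("returns sandbox id", resp.PodSandboxId)
    }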
Jan 13 21:26:14.468124 kubelet[2666]: E0113 21:26:14.468036 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5"
Jan 13 21:26:14.680917 containerd[1459]: time="2025-01-13T21:26:14.680824688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:14.682432 containerd[1459]: time="2025-01-13T21:26:14.682325261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 21:26:14.684274 containerd[1459]: time="2025-01-13T21:26:14.684199286Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:14.687670 containerd[1459]: time="2025-01-13T21:26:14.687586581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:14.688841 containerd[1459]: time="2025-01-13T21:26:14.688791970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.982308703s"
Jan 13 21:26:14.688977 containerd[1459]: time="2025-01-13T21:26:14.688852594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 21:26:14.692322 containerd[1459]: time="2025-01-13T21:26:14.692235034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:26:14.721826 containerd[1459]: time="2025-01-13T21:26:14.721559278Z" level=info msg="CreateContainer within sandbox \"dd27a1b0865809c186c770b855953f59e169eb039b41acf5c38b6ef538f5bc75\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 21:26:14.743465 containerd[1459]: time="2025-01-13T21:26:14.743368565Z" level=info msg="CreateContainer within sandbox \"dd27a1b0865809c186c770b855953f59e169eb039b41acf5c38b6ef538f5bc75\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"67cde074c77892cae2ca8d57a9c77f9e9373c70aef8204d941cbef07d5af3657\""
Jan 13 21:26:14.744669 containerd[1459]: time="2025-01-13T21:26:14.744479502Z" level=info msg="StartContainer for \"67cde074c77892cae2ca8d57a9c77f9e9373c70aef8204d941cbef07d5af3657\""
Jan 13 21:26:14.798990 systemd[1]: Started cri-containerd-67cde074c77892cae2ca8d57a9c77f9e9373c70aef8204d941cbef07d5af3657.scope - libcontainer container 67cde074c77892cae2ca8d57a9c77f9e9373c70aef8204d941cbef07d5af3657.
Jan 13 21:26:14.867838 containerd[1459]: time="2025-01-13T21:26:14.867591550Z" level=info msg="StartContainer for \"67cde074c77892cae2ca8d57a9c77f9e9373c70aef8204d941cbef07d5af3657\" returns successfully"
Jan 13 21:26:15.634709 kubelet[2666]: I0113 21:26:15.633874 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c6f6b644c-xmx58" podStartSLOduration=1.648390446 podStartE2EDuration="3.633845692s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:12.70502229 +0000 UTC m=+26.412877403" lastFinishedPulling="2025-01-13 21:26:14.690477537 +0000 UTC m=+28.398332649" observedRunningTime="2025-01-13 21:26:15.629736376 +0000 UTC m=+29.337591498" watchObservedRunningTime="2025-01-13 21:26:15.633845692 +0000 UTC m=+29.341700829"
Jan 13 21:26:15.653953 kubelet[2666]: E0113 21:26:15.653910 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:26:15.653953 kubelet[2666]: W0113 21:26:15.653946 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:26:15.654529 kubelet[2666]: E0113 21:26:15.653975 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:26:15.691545 kubelet[2666]: E0113 21:26:15.691524 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:26:15.691545 kubelet[2666]: W0113 21:26:15.691541 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:26:15.691676 kubelet[2666]: E0113 21:26:15.691558 2666 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
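The startup-latency numbers in the tracker entry above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the arithmetic from the logged monotonic m=+ offsets (kubelet's tracker works on time.Time values, not raw floats):

    package main

    import "fmt"

    func main() {
    	// m=+ monotonic offsets (seconds) copied from the tracker entry above.
    	firstStartedPulling := 26.412877403
    	lastFinishedPulling := 28.398332649
    	podStartE2E := 3.633845692 // observedRunningTime 21:26:15.633845692 minus podCreationTimestamp 21:26:12

    	imagePulling := lastFinishedPulling - firstStartedPulling
    	podStartSLO := podStartE2E - imagePulling
    	fmt.Printf("imagePulling=%.9fs podStartSLO=%.9fs\n", imagePulling, podStartSLO)
    	// imagePulling ≈ 1.985455246s, podStartSLO ≈ 1.648390446s, matching
    	// podStartSLOduration in the log up to float rounding.
    }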
Jan 13 21:26:15.694934 containerd[1459]: time="2025-01-13T21:26:15.694876735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:15.696289 containerd[1459]: time="2025-01-13T21:26:15.696217848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 13 21:26:15.698448 containerd[1459]: time="2025-01-13T21:26:15.698400395Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:15.702191 containerd[1459]: time="2025-01-13T21:26:15.702114225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:15.703341 containerd[1459]: time="2025-01-13T21:26:15.703187705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.010890739s"
Jan 13 21:26:15.703341 containerd[1459]: time="2025-01-13T21:26:15.703273673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 13 21:26:15.706661 containerd[1459]: time="2025-01-13T21:26:15.706617855Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:26:15.728964 containerd[1459]: time="2025-01-13T21:26:15.728909342Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8\""
Jan 13 21:26:15.731045 containerd[1459]: time="2025-01-13T21:26:15.729576998Z" level=info msg="StartContainer for \"d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8\""
Jan 13 21:26:15.777914 systemd[1]: Started cri-containerd-d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8.scope - libcontainer container d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8.
Jan 13 21:26:15.823826 containerd[1459]: time="2025-01-13T21:26:15.823728141Z" level=info msg="StartContainer for \"d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8\" returns successfully"
Jan 13 21:26:15.841540 systemd[1]: cri-containerd-d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8.scope: Deactivated successfully.
Jan 13 21:26:15.877238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8-rootfs.mount: Deactivated successfully.
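The flexvol-driver container that starts and immediately deactivates above is calico-node's pod2daemon init step: it installs the uds FlexVolume binary into the host plugin directory, which is what eventually silences the probing errors. Per the FlexVolume calling convention those errors describe, the installed driver must answer init with a JSON status on stdout; a toy driver honoring that contract (an illustrative sketch, not Calico's actual binary) might look like:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Toy FlexVolume driver: kubelet invokes the binary with a verb
    // ("init", "mount", ...) and parses the JSON printed to stdout.
    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// "attach":false tells kubelet this driver needs no attach/detach step.
    		fmt.Print(`{"status":"Success","capabilities":{"attach":false}}`)
    		return
    	}
    	fmt.Print(`{"status":"Not supported"}`)
    	os.Exit(1)
    }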
Jan 13 21:26:16.462868 containerd[1459]: time="2025-01-13T21:26:16.462674640Z" level=info msg="shim disconnected" id=d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8 namespace=k8s.io
Jan 13 21:26:16.463286 containerd[1459]: time="2025-01-13T21:26:16.462873235Z" level=warning msg="cleaning up after shim disconnected" id=d530ce2034853d342da291ce8c5e610dbfd21325865d4269ad743bc65f8090f8 namespace=k8s.io
Jan 13 21:26:16.463286 containerd[1459]: time="2025-01-13T21:26:16.462893219Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:16.469310 kubelet[2666]: E0113 21:26:16.469265 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5"
Jan 13 21:26:16.613458 kubelet[2666]: I0113 21:26:16.613409 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:26:16.616319 containerd[1459]: time="2025-01-13T21:26:16.615615738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 21:26:18.470086 kubelet[2666]: E0113 21:26:18.468098 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5"
Jan 13 21:26:20.467581 kubelet[2666]: E0113 21:26:20.467488 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5"
Jan 13 21:26:20.492606 containerd[1459]: time="2025-01-13T21:26:20.492481091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:20.494073 containerd[1459]: time="2025-01-13T21:26:20.494000499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 21:26:20.495848 containerd[1459]: time="2025-01-13T21:26:20.495781599Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:20.499152 containerd[1459]: time="2025-01-13T21:26:20.499067544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:20.500670 containerd[1459]: time="2025-01-13T21:26:20.500072145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.884403363s"
Jan 13 21:26:20.500670 containerd[1459]: time="2025-01-13T21:26:20.500119910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 21:26:20.503643 containerd[1459]: time="2025-01-13T21:26:20.503344654Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:26:20.524387 containerd[1459]: time="2025-01-13T21:26:20.524332315Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660\""
Jan 13 21:26:20.525990 containerd[1459]: time="2025-01-13T21:26:20.525840240Z" level=info msg="StartContainer for \"66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660\""
Jan 13 21:26:20.573908 systemd[1]: Started cri-containerd-66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660.scope - libcontainer container 66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660.
Jan 13 21:26:20.611897 containerd[1459]: time="2025-01-13T21:26:20.611689733Z" level=info msg="StartContainer for \"66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660\" returns successfully"
Jan 13 21:26:21.594885 containerd[1459]: time="2025-01-13T21:26:21.594801852Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:26:21.597884 systemd[1]: cri-containerd-66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660.scope: Deactivated successfully.
Jan 13 21:26:21.605390 kubelet[2666]: I0113 21:26:21.605358 2666 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:26:21.644957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660-rootfs.mount: Deactivated successfully.
Jan 13 21:26:21.661071 kubelet[2666]: I0113 21:26:21.660994 2666 topology_manager.go:215] "Topology Admit Handler" podUID="890d7497-a5c2-420a-b9d1-ef249860cf9d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gfh9x"
Jan 13 21:26:21.667530 kubelet[2666]: I0113 21:26:21.667475 2666 topology_manager.go:215] "Topology Admit Handler" podUID="fe1c0d21-cbd2-493c-aac5-49c46482135d" podNamespace="calico-system" podName="calico-kube-controllers-6dfb8cc8cd-swbqm"
Jan 13 21:26:21.667747 kubelet[2666]: I0113 21:26:21.667728 2666 topology_manager.go:215] "Topology Admit Handler" podUID="eea4dbd0-48b0-456a-9730-2e0b0b5023a9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6cdjn"
Jan 13 21:26:21.675359 kubelet[2666]: I0113 21:26:21.675312 2666 topology_manager.go:215] "Topology Admit Handler" podUID="9bf7ec82-1e0a-4102-bad0-cba9e9a839cf" podNamespace="calico-apiserver" podName="calico-apiserver-6556db8f5f-dtk9h"
Jan 13 21:26:21.677336 kubelet[2666]: I0113 21:26:21.677295 2666 topology_manager.go:215] "Topology Admit Handler" podUID="a09ac35e-7d2f-4aff-9b72-388aa54a776e" podNamespace="calico-apiserver" podName="calico-apiserver-6556db8f5f-j25nm"
Jan 13 21:26:21.690132 systemd[1]: Created slice kubepods-burstable-pod890d7497_a5c2_420a_b9d1_ef249860cf9d.slice - libcontainer container kubepods-burstable-pod890d7497_a5c2_420a_b9d1_ef249860cf9d.slice.
Jan 13 21:26:21.715636 systemd[1]: Created slice kubepods-burstable-podeea4dbd0_48b0_456a_9730_2e0b0b5023a9.slice - libcontainer container kubepods-burstable-podeea4dbd0_48b0_456a_9730_2e0b0b5023a9.slice.
Jan 13 21:26:21.726780 kubelet[2666]: I0113 21:26:21.724777 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5sh\" (UniqueName: \"kubernetes.io/projected/a09ac35e-7d2f-4aff-9b72-388aa54a776e-kube-api-access-8c5sh\") pod \"calico-apiserver-6556db8f5f-j25nm\" (UID: \"a09ac35e-7d2f-4aff-9b72-388aa54a776e\") " pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm"
Jan 13 21:26:21.726780 kubelet[2666]: I0113 21:26:21.724828 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jdk\" (UniqueName: \"kubernetes.io/projected/890d7497-a5c2-420a-b9d1-ef249860cf9d-kube-api-access-n7jdk\") pod \"coredns-7db6d8ff4d-gfh9x\" (UID: \"890d7497-a5c2-420a-b9d1-ef249860cf9d\") " pod="kube-system/coredns-7db6d8ff4d-gfh9x"
Jan 13 21:26:21.726780 kubelet[2666]: I0113 21:26:21.724882 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eea4dbd0-48b0-456a-9730-2e0b0b5023a9-config-volume\") pod \"coredns-7db6d8ff4d-6cdjn\" (UID: \"eea4dbd0-48b0-456a-9730-2e0b0b5023a9\") " pod="kube-system/coredns-7db6d8ff4d-6cdjn"
Jan 13 21:26:21.726780 kubelet[2666]: I0113 21:26:21.724952 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1c0d21-cbd2-493c-aac5-49c46482135d-tigera-ca-bundle\") pod \"calico-kube-controllers-6dfb8cc8cd-swbqm\" (UID: \"fe1c0d21-cbd2-493c-aac5-49c46482135d\") " pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm"
Jan 13 21:26:21.726780 kubelet[2666]: I0113 21:26:21.724988 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjz4n\" (UniqueName: \"kubernetes.io/projected/9bf7ec82-1e0a-4102-bad0-cba9e9a839cf-kube-api-access-gjz4n\") pod \"calico-apiserver-6556db8f5f-dtk9h\" (UID: \"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf\") " pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h"
Jan 13 21:26:21.728000 kubelet[2666]: I0113 21:26:21.725018 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrlr\" (UniqueName: \"kubernetes.io/projected/fe1c0d21-cbd2-493c-aac5-49c46482135d-kube-api-access-clrlr\") pod \"calico-kube-controllers-6dfb8cc8cd-swbqm\" (UID: \"fe1c0d21-cbd2-493c-aac5-49c46482135d\") " pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm"
Jan 13 21:26:21.728000 kubelet[2666]: I0113 21:26:21.725047 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890d7497-a5c2-420a-b9d1-ef249860cf9d-config-volume\") pod \"coredns-7db6d8ff4d-gfh9x\" (UID: \"890d7497-a5c2-420a-b9d1-ef249860cf9d\") " pod="kube-system/coredns-7db6d8ff4d-gfh9x"
Jan 13 21:26:21.728000 kubelet[2666]: I0113 21:26:21.725085 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bf7ec82-1e0a-4102-bad0-cba9e9a839cf-calico-apiserver-certs\") pod \"calico-apiserver-6556db8f5f-dtk9h\" (UID: \"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf\") " pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h"
Jan 13 21:26:21.728000 kubelet[2666]: I0113 21:26:21.725126 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a09ac35e-7d2f-4aff-9b72-388aa54a776e-calico-apiserver-certs\") pod \"calico-apiserver-6556db8f5f-j25nm\" (UID: \"a09ac35e-7d2f-4aff-9b72-388aa54a776e\") " pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm"
Jan 13 21:26:21.728000 kubelet[2666]: I0113 21:26:21.725160 2666 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljvq\" (UniqueName: \"kubernetes.io/projected/eea4dbd0-48b0-456a-9730-2e0b0b5023a9-kube-api-access-cljvq\") pod \"coredns-7db6d8ff4d-6cdjn\" (UID: \"eea4dbd0-48b0-456a-9730-2e0b0b5023a9\") " pod="kube-system/coredns-7db6d8ff4d-6cdjn"
Jan 13 21:26:21.732962 systemd[1]: Created slice kubepods-besteffort-podfe1c0d21_cbd2_493c_aac5_49c46482135d.slice - libcontainer container kubepods-besteffort-podfe1c0d21_cbd2_493c_aac5_49c46482135d.slice.
Jan 13 21:26:21.769879 systemd[1]: Created slice kubepods-besteffort-pod9bf7ec82_1e0a_4102_bad0_cba9e9a839cf.slice - libcontainer container kubepods-besteffort-pod9bf7ec82_1e0a_4102_bad0_cba9e9a839cf.slice.
Jan 13 21:26:21.780238 systemd[1]: Created slice kubepods-besteffort-poda09ac35e_7d2f_4aff_9b72_388aa54a776e.slice - libcontainer container kubepods-besteffort-poda09ac35e_7d2f_4aff_9b72_388aa54a776e.slice.
Jan 13 21:26:22.002494 containerd[1459]: time="2025-01-13T21:26:22.002437380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfh9x,Uid:890d7497-a5c2-420a-b9d1-ef249860cf9d,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:22.024623 containerd[1459]: time="2025-01-13T21:26:22.024565076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cdjn,Uid:eea4dbd0-48b0-456a-9730-2e0b0b5023a9,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:22.060144 containerd[1459]: time="2025-01-13T21:26:22.059844133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfb8cc8cd-swbqm,Uid:fe1c0d21-cbd2-493c-aac5-49c46482135d,Namespace:calico-system,Attempt:0,}"
Jan 13 21:26:22.075561 containerd[1459]: time="2025-01-13T21:26:22.075485108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-dtk9h,Uid:9bf7ec82-1e0a-4102-bad0-cba9e9a839cf,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:26:22.089718 containerd[1459]: time="2025-01-13T21:26:22.089645500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-j25nm,Uid:a09ac35e-7d2f-4aff-9b72-388aa54a776e,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 21:26:22.411354 containerd[1459]: time="2025-01-13T21:26:22.411089614Z" level=info msg="shim disconnected" id=66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660 namespace=k8s.io
Jan 13 21:26:22.411354 containerd[1459]: time="2025-01-13T21:26:22.411201824Z" level=warning msg="cleaning up after shim disconnected" id=66672904cf9be759fce8e47f4f0dc9b357b129c8c28844a62ce0700b1892f660 namespace=k8s.io
Jan 13 21:26:22.411354 containerd[1459]: time="2025-01-13T21:26:22.411219913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:22.488947 systemd[1]: Created slice kubepods-besteffort-podc6337092_d429_49f9_9c09_de05379de9a5.slice - libcontainer container kubepods-besteffort-podc6337092_d429_49f9_9c09_de05379de9a5.slice.
Jan 13 21:26:22.502269 containerd[1459]: time="2025-01-13T21:26:22.501197527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brgw4,Uid:c6337092-d429-49f9-9c09-de05379de9a5,Namespace:calico-system,Attempt:0,}" Jan 13 21:26:22.692025 containerd[1459]: time="2025-01-13T21:26:22.691834988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:26:22.925340 containerd[1459]: time="2025-01-13T21:26:22.923541091Z" level=error msg="Failed to destroy network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.937657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412-shm.mount: Deactivated successfully. Jan 13 21:26:22.945993 containerd[1459]: time="2025-01-13T21:26:22.945523110Z" level=error msg="Failed to destroy network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.947030 containerd[1459]: time="2025-01-13T21:26:22.946740612Z" level=error msg="encountered an error cleaning up failed sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.947343 containerd[1459]: time="2025-01-13T21:26:22.947297041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfh9x,Uid:890d7497-a5c2-420a-b9d1-ef249860cf9d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.948981 kubelet[2666]: E0113 21:26:22.947844 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.948981 kubelet[2666]: E0113 21:26:22.947953 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gfh9x" Jan 13 21:26:22.948981 kubelet[2666]: E0113 21:26:22.947988 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gfh9x" Jan 13 21:26:22.949845 containerd[1459]: time="2025-01-13T21:26:22.948793934Z" level=error msg="encountered an error cleaning up failed sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.949845 containerd[1459]: time="2025-01-13T21:26:22.948883805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-dtk9h,Uid:9bf7ec82-1e0a-4102-bad0-cba9e9a839cf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.950044 kubelet[2666]: E0113 21:26:22.948058 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gfh9x_kube-system(890d7497-a5c2-420a-b9d1-ef249860cf9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gfh9x_kube-system(890d7497-a5c2-420a-b9d1-ef249860cf9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gfh9x" podUID="890d7497-a5c2-420a-b9d1-ef249860cf9d" Jan 13 21:26:22.952723 kubelet[2666]: E0113 21:26:22.950948 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.952723 kubelet[2666]: E0113 21:26:22.951081 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h" Jan 13 21:26:22.952723 kubelet[2666]: E0113 21:26:22.951174 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h" 
Jan 13 21:26:22.952990 kubelet[2666]: E0113 21:26:22.951287 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6556db8f5f-dtk9h_calico-apiserver(9bf7ec82-1e0a-4102-bad0-cba9e9a839cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6556db8f5f-dtk9h_calico-apiserver(9bf7ec82-1e0a-4102-bad0-cba9e9a839cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h" podUID="9bf7ec82-1e0a-4102-bad0-cba9e9a839cf" Jan 13 21:26:22.955662 containerd[1459]: time="2025-01-13T21:26:22.955604302Z" level=error msg="Failed to destroy network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.956335 containerd[1459]: time="2025-01-13T21:26:22.956290141Z" level=error msg="encountered an error cleaning up failed sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.957063 containerd[1459]: time="2025-01-13T21:26:22.957004640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cdjn,Uid:eea4dbd0-48b0-456a-9730-2e0b0b5023a9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.957579 kubelet[2666]: E0113 21:26:22.957529 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.957871 kubelet[2666]: E0113 21:26:22.957838 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6cdjn" Jan 13 21:26:22.958094 kubelet[2666]: E0113 21:26:22.958065 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6cdjn" Jan 13 21:26:22.958577 kubelet[2666]: E0113 21:26:22.958509 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6cdjn_kube-system(eea4dbd0-48b0-456a-9730-2e0b0b5023a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6cdjn_kube-system(eea4dbd0-48b0-456a-9730-2e0b0b5023a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6cdjn" podUID="eea4dbd0-48b0-456a-9730-2e0b0b5023a9" Jan 13 21:26:22.959326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9-shm.mount: Deactivated successfully. Jan 13 21:26:22.974402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf-shm.mount: Deactivated successfully. Jan 13 21:26:22.985324 containerd[1459]: time="2025-01-13T21:26:22.985241642Z" level=error msg="Failed to destroy network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.987010 containerd[1459]: time="2025-01-13T21:26:22.986807824Z" level=error msg="encountered an error cleaning up failed sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.987010 containerd[1459]: time="2025-01-13T21:26:22.986963971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-j25nm,Uid:a09ac35e-7d2f-4aff-9b72-388aa54a776e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.988334 kubelet[2666]: E0113 21:26:22.987645 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:22.988334 kubelet[2666]: E0113 21:26:22.987833 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm" Jan 13 21:26:22.988334 kubelet[2666]: E0113 21:26:22.987871 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm" Jan 13 21:26:22.988610 kubelet[2666]: E0113 21:26:22.987964 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6556db8f5f-j25nm_calico-apiserver(a09ac35e-7d2f-4aff-9b72-388aa54a776e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6556db8f5f-j25nm_calico-apiserver(a09ac35e-7d2f-4aff-9b72-388aa54a776e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm" podUID="a09ac35e-7d2f-4aff-9b72-388aa54a776e" Jan 13 21:26:22.994974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab-shm.mount: Deactivated successfully. Jan 13 21:26:22.998984 containerd[1459]: time="2025-01-13T21:26:22.998886762Z" level=error msg="Failed to destroy network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.000110 containerd[1459]: time="2025-01-13T21:26:22.999952921Z" level=error msg="encountered an error cleaning up failed sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.000336 containerd[1459]: time="2025-01-13T21:26:23.000086772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfb8cc8cd-swbqm,Uid:fe1c0d21-cbd2-493c-aac5-49c46482135d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.002044 kubelet[2666]: E0113 21:26:23.000863 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.002044 kubelet[2666]: E0113 21:26:23.000951 2666 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm" Jan 13 21:26:23.002044 kubelet[2666]: E0113 21:26:23.000986 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm" Jan 13 21:26:23.002273 kubelet[2666]: E0113 21:26:23.001067 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dfb8cc8cd-swbqm_calico-system(fe1c0d21-cbd2-493c-aac5-49c46482135d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dfb8cc8cd-swbqm_calico-system(fe1c0d21-cbd2-493c-aac5-49c46482135d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm" podUID="fe1c0d21-cbd2-493c-aac5-49c46482135d" Jan 13 21:26:23.004676 containerd[1459]: time="2025-01-13T21:26:23.004624407Z" level=error msg="Failed to destroy network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.005178 containerd[1459]: time="2025-01-13T21:26:23.005127972Z" level=error msg="encountered an error cleaning up failed sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.006038 containerd[1459]: time="2025-01-13T21:26:23.005252786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brgw4,Uid:c6337092-d429-49f9-9c09-de05379de9a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.006144 kubelet[2666]: E0113 21:26:23.005664 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.006144 kubelet[2666]: E0113 21:26:23.005754 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:23.006144 kubelet[2666]: E0113 21:26:23.005786 2666 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-brgw4" Jan 13 21:26:23.006350 kubelet[2666]: E0113 21:26:23.006028 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-brgw4_calico-system(c6337092-d429-49f9-9c09-de05379de9a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-brgw4_calico-system(c6337092-d429-49f9-9c09-de05379de9a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5" Jan 13 21:26:23.646879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a-shm.mount: Deactivated successfully. Jan 13 21:26:23.647127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad-shm.mount: Deactivated successfully. 
Jan 13 21:26:23.689269 kubelet[2666]: I0113 21:26:23.689231 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:23.691560 containerd[1459]: time="2025-01-13T21:26:23.690911728Z" level=info msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\"" Jan 13 21:26:23.691560 containerd[1459]: time="2025-01-13T21:26:23.691186168Z" level=info msg="Ensure that sandbox b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf in task-service has been cleanup successfully" Jan 13 21:26:23.709299 kubelet[2666]: I0113 21:26:23.709269 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:23.714311 containerd[1459]: time="2025-01-13T21:26:23.712602875Z" level=info msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" Jan 13 21:26:23.714311 containerd[1459]: time="2025-01-13T21:26:23.712871430Z" level=info msg="Ensure that sandbox 528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412 in task-service has been cleanup successfully" Jan 13 21:26:23.717806 kubelet[2666]: I0113 21:26:23.717687 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:23.722117 containerd[1459]: time="2025-01-13T21:26:23.722072273Z" level=info msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" Jan 13 21:26:23.722580 containerd[1459]: time="2025-01-13T21:26:23.722323322Z" level=info msg="Ensure that sandbox 444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab in task-service has been cleanup successfully" Jan 13 21:26:23.734200 kubelet[2666]: I0113 21:26:23.734140 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:23.739286 containerd[1459]: time="2025-01-13T21:26:23.739184172Z" level=info msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" Jan 13 21:26:23.741622 containerd[1459]: time="2025-01-13T21:26:23.741368561Z" level=info msg="Ensure that sandbox 2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9 in task-service has been cleanup successfully" Jan 13 21:26:23.747817 kubelet[2666]: I0113 21:26:23.747086 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:23.755133 containerd[1459]: time="2025-01-13T21:26:23.755086390Z" level=info msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\"" Jan 13 21:26:23.755375 containerd[1459]: time="2025-01-13T21:26:23.755341951Z" level=info msg="Ensure that sandbox 2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad in task-service has been cleanup successfully" Jan 13 21:26:23.773122 kubelet[2666]: I0113 21:26:23.773059 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:23.779495 containerd[1459]: time="2025-01-13T21:26:23.779446547Z" level=info msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" Jan 13 21:26:23.779773 
containerd[1459]: time="2025-01-13T21:26:23.779724959Z" level=info msg="Ensure that sandbox 39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a in task-service has been cleanup successfully" Jan 13 21:26:23.874628 containerd[1459]: time="2025-01-13T21:26:23.874448396Z" level=error msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" failed" error="failed to destroy network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.874960 kubelet[2666]: E0113 21:26:23.874739 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:23.874960 kubelet[2666]: E0113 21:26:23.874813 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412"} Jan 13 21:26:23.874960 kubelet[2666]: E0113 21:26:23.874923 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"890d7497-a5c2-420a-b9d1-ef249860cf9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.875564 kubelet[2666]: E0113 21:26:23.874964 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"890d7497-a5c2-420a-b9d1-ef249860cf9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gfh9x" podUID="890d7497-a5c2-420a-b9d1-ef249860cf9d" Jan 13 21:26:23.875963 containerd[1459]: time="2025-01-13T21:26:23.875738898Z" level=error msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" failed" error="failed to destroy network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.876167 kubelet[2666]: E0113 21:26:23.875971 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:23.876167 kubelet[2666]: E0113 21:26:23.876026 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"} Jan 13 21:26:23.876167 kubelet[2666]: E0113 21:26:23.876067 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eea4dbd0-48b0-456a-9730-2e0b0b5023a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.876167 kubelet[2666]: E0113 21:26:23.876099 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eea4dbd0-48b0-456a-9730-2e0b0b5023a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6cdjn" podUID="eea4dbd0-48b0-456a-9730-2e0b0b5023a9" Jan 13 21:26:23.894731 containerd[1459]: time="2025-01-13T21:26:23.894305208Z" level=error msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" failed" error="failed to destroy network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.894900 kubelet[2666]: E0113 21:26:23.894583 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:23.895283 kubelet[2666]: E0113 21:26:23.895055 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9"} Jan 13 21:26:23.895283 kubelet[2666]: E0113 21:26:23.895156 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.895283 kubelet[2666]: E0113 21:26:23.895217 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h" podUID="9bf7ec82-1e0a-4102-bad0-cba9e9a839cf" Jan 13 21:26:23.903804 containerd[1459]: time="2025-01-13T21:26:23.903643907Z" level=error msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" failed" error="failed to destroy network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.904837 kubelet[2666]: E0113 21:26:23.904560 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:23.904837 kubelet[2666]: E0113 21:26:23.904615 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab"} Jan 13 21:26:23.904837 kubelet[2666]: E0113 21:26:23.904668 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a09ac35e-7d2f-4aff-9b72-388aa54a776e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.905630 kubelet[2666]: E0113 21:26:23.905131 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a09ac35e-7d2f-4aff-9b72-388aa54a776e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm" podUID="a09ac35e-7d2f-4aff-9b72-388aa54a776e" Jan 13 21:26:23.920969 containerd[1459]: time="2025-01-13T21:26:23.920903273Z" level=error msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" failed" error="failed to destroy network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.922063 kubelet[2666]: E0113 21:26:23.921197 2666 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:23.922063 kubelet[2666]: E0113 21:26:23.921835 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"} Jan 13 21:26:23.922063 kubelet[2666]: E0113 21:26:23.921944 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe1c0d21-cbd2-493c-aac5-49c46482135d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.922063 kubelet[2666]: E0113 21:26:23.922005 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe1c0d21-cbd2-493c-aac5-49c46482135d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm" podUID="fe1c0d21-cbd2-493c-aac5-49c46482135d" Jan 13 21:26:23.926338 containerd[1459]: time="2025-01-13T21:26:23.925850417Z" level=error msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" failed" error="failed to destroy network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:26:23.926473 kubelet[2666]: E0113 21:26:23.926089 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:23.926473 kubelet[2666]: E0113 21:26:23.926141 2666 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a"} Jan 13 21:26:23.926473 kubelet[2666]: E0113 21:26:23.926197 2666 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6337092-d429-49f9-9c09-de05379de9a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:26:23.926473 kubelet[2666]: E0113 21:26:23.926257 2666 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6337092-d429-49f9-9c09-de05379de9a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-brgw4" podUID="c6337092-d429-49f9-9c09-de05379de9a5" Jan 13 21:26:29.660131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966251575.mount: Deactivated successfully. Jan 13 21:26:29.707525 containerd[1459]: time="2025-01-13T21:26:29.707434798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.709047 containerd[1459]: time="2025-01-13T21:26:29.708956587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:26:29.710916 containerd[1459]: time="2025-01-13T21:26:29.710835841Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.715870 containerd[1459]: time="2025-01-13T21:26:29.715606341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:29.717393 containerd[1459]: time="2025-01-13T21:26:29.716727908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.022429173s" Jan 13 21:26:29.717393 containerd[1459]: time="2025-01-13T21:26:29.716784687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:26:29.742753 containerd[1459]: time="2025-01-13T21:26:29.741996128Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:26:29.769739 containerd[1459]: time="2025-01-13T21:26:29.769585984Z" level=info msg="CreateContainer within sandbox \"c1671bba9ad9de2be0276dae2c9e4cb8f98b3e44b3b3d2e9affef2771e5239fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804\"" Jan 13 21:26:29.770673 containerd[1459]: time="2025-01-13T21:26:29.770493980Z" level=info msg="StartContainer for \"77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804\"" Jan 13 21:26:29.829043 systemd[1]: Started cri-containerd-77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804.scope - libcontainer container 
77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804. Jan 13 21:26:29.883163 containerd[1459]: time="2025-01-13T21:26:29.882818049Z" level=info msg="StartContainer for \"77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804\" returns successfully" Jan 13 21:26:30.008003 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:26:30.008248 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 21:26:30.843345 kubelet[2666]: I0113 21:26:30.842687 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7fcj5" podStartSLOduration=1.858131208 podStartE2EDuration="18.842655096s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:12.733586141 +0000 UTC m=+26.441441245" lastFinishedPulling="2025-01-13 21:26:29.718110021 +0000 UTC m=+43.425965133" observedRunningTime="2025-01-13 21:26:30.840448583 +0000 UTC m=+44.548303727" watchObservedRunningTime="2025-01-13 21:26:30.842655096 +0000 UTC m=+44.550510278" Jan 13 21:26:31.780564 kubelet[2666]: I0113 21:26:31.780065 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:26:31.885900 systemd[1]: run-containerd-runc-k8s.io-77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804-runc.7qETPj.mount: Deactivated successfully. Jan 13 21:26:31.991906 kernel: bpftool[3921]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:26:32.293194 systemd-networkd[1374]: vxlan.calico: Link UP Jan 13 21:26:32.294779 systemd-networkd[1374]: vxlan.calico: Gained carrier Jan 13 21:26:33.479061 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jan 13 21:26:34.469548 containerd[1459]: time="2025-01-13T21:26:34.469430591Z" level=info msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.542 [INFO][4042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.544 [INFO][4042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" iface="eth0" netns="/var/run/netns/cni-b75e291a-5f36-3038-2944-285af1057136" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.544 [INFO][4042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" iface="eth0" netns="/var/run/netns/cni-b75e291a-5f36-3038-2944-285af1057136" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.545 [INFO][4042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" iface="eth0" netns="/var/run/netns/cni-b75e291a-5f36-3038-2944-285af1057136" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.545 [INFO][4042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.545 [INFO][4042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.577 [INFO][4049] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.578 [INFO][4049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.578 [INFO][4049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.586 [WARNING][4049] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.586 [INFO][4049] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.590 [INFO][4049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:34.594742 containerd[1459]: 2025-01-13 21:26:34.592 [INFO][4042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:34.601725 containerd[1459]: time="2025-01-13T21:26:34.595808634Z" level=info msg="TearDown network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" successfully" Jan 13 21:26:34.601725 containerd[1459]: time="2025-01-13T21:26:34.595863226Z" level=info msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" returns successfully" Jan 13 21:26:34.601725 containerd[1459]: time="2025-01-13T21:26:34.598104551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-j25nm,Uid:a09ac35e-7d2f-4aff-9b72-388aa54a776e,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:26:34.602343 systemd[1]: run-netns-cni\x2db75e291a\x2d5f36\x2d3038\x2d2944\x2d285af1057136.mount: Deactivated successfully. 
Jan 13 21:26:34.781414 systemd-networkd[1374]: cali704d7ebc52d: Link UP Jan 13 21:26:34.785031 systemd-networkd[1374]: cali704d7ebc52d: Gained carrier Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.680 [INFO][4056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0 calico-apiserver-6556db8f5f- calico-apiserver a09ac35e-7d2f-4aff-9b72-388aa54a776e 784 0 2025-01-13 21:26:11 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6556db8f5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal calico-apiserver-6556db8f5f-j25nm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali704d7ebc52d [] []}} ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.680 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.723 [INFO][4066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" HandleID="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.737 [INFO][4066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" HandleID="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"calico-apiserver-6556db8f5f-j25nm", "timestamp":"2025-01-13 21:26:34.723271837 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.737 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.737 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.737 [INFO][4066] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.739 [INFO][4066] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.745 [INFO][4066] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.751 [INFO][4066] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.754 [INFO][4066] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.757 [INFO][4066] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.757 [INFO][4066] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.759 [INFO][4066] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031 Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.765 [INFO][4066] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.773 [INFO][4066] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.193/26] block=192.168.91.192/26 handle="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.773 [INFO][4066] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.193/26] handle="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.773 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:34.810172 containerd[1459]: 2025-01-13 21:26:34.773 [INFO][4066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.193/26] IPv6=[] ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" HandleID="k8s-pod-network.36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.775 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a09ac35e-7d2f-4aff-9b72-388aa54a776e", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6556db8f5f-j25nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali704d7ebc52d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.776 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.193/32] ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.776 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali704d7ebc52d ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.779 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" 
Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.780 [INFO][4056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a09ac35e-7d2f-4aff-9b72-388aa54a776e", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031", Pod:"calico-apiserver-6556db8f5f-j25nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali704d7ebc52d", MAC:"f6:01:52:ed:d5:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:34.820196 containerd[1459]: 2025-01-13 21:26:34.806 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-j25nm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:34.859521 containerd[1459]: time="2025-01-13T21:26:34.859218190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:34.859899 containerd[1459]: time="2025-01-13T21:26:34.859549607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:34.859899 containerd[1459]: time="2025-01-13T21:26:34.859579588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:34.859899 containerd[1459]: time="2025-01-13T21:26:34.859763558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:34.902997 systemd[1]: Started cri-containerd-36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031.scope - libcontainer container 36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031. Jan 13 21:26:34.970652 containerd[1459]: time="2025-01-13T21:26:34.970591935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-j25nm,Uid:a09ac35e-7d2f-4aff-9b72-388aa54a776e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031\"" Jan 13 21:26:34.976606 containerd[1459]: time="2025-01-13T21:26:34.976553136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:26:35.469400 containerd[1459]: time="2025-01-13T21:26:35.468826359Z" level=info msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\"" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.539 [INFO][4138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.540 [INFO][4138] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" iface="eth0" netns="/var/run/netns/cni-ccb4493b-fd14-0d07-2703-bb17b11560a6" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.541 [INFO][4138] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" iface="eth0" netns="/var/run/netns/cni-ccb4493b-fd14-0d07-2703-bb17b11560a6" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.541 [INFO][4138] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" iface="eth0" netns="/var/run/netns/cni-ccb4493b-fd14-0d07-2703-bb17b11560a6" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.542 [INFO][4138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.542 [INFO][4138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.575 [INFO][4144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.575 [INFO][4144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.575 [INFO][4144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.582 [WARNING][4144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.582 [INFO][4144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.584 [INFO][4144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:35.587163 containerd[1459]: 2025-01-13 21:26:35.585 [INFO][4138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Jan 13 21:26:35.588604 containerd[1459]: time="2025-01-13T21:26:35.587933059Z" level=info msg="TearDown network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" successfully" Jan 13 21:26:35.588604 containerd[1459]: time="2025-01-13T21:26:35.587979678Z" level=info msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" returns successfully" Jan 13 21:26:35.590670 containerd[1459]: time="2025-01-13T21:26:35.589765176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfb8cc8cd-swbqm,Uid:fe1c0d21-cbd2-493c-aac5-49c46482135d,Namespace:calico-system,Attempt:1,}" Jan 13 21:26:35.592586 systemd[1]: run-netns-cni\x2dccb4493b\x2dfd14\x2d0d07\x2d2703\x2dbb17b11560a6.mount: Deactivated successfully. 
Jan 13 21:26:35.746689 systemd-networkd[1374]: cali1bceffb2ec1: Link UP Jan 13 21:26:35.747916 systemd-networkd[1374]: cali1bceffb2ec1: Gained carrier Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.667 [INFO][4150] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0 calico-kube-controllers-6dfb8cc8cd- calico-system fe1c0d21-cbd2-493c-aac5-49c46482135d 794 0 2025-01-13 21:26:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dfb8cc8cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal calico-kube-controllers-6dfb8cc8cd-swbqm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1bceffb2ec1 [] []}} ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.667 [INFO][4150] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.698 [INFO][4161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" HandleID="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.709 [INFO][4161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" HandleID="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6dfb8cc8cd-swbqm", "timestamp":"2025-01-13 21:26:35.698550882 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.709 [INFO][4161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.710 [INFO][4161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.710 [INFO][4161] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.712 [INFO][4161] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.717 [INFO][4161] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.722 [INFO][4161] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.724 [INFO][4161] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.726 [INFO][4161] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.726 [INFO][4161] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.727 [INFO][4161] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.732 [INFO][4161] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.739 [INFO][4161] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.194/26] block=192.168.91.192/26 handle="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.739 [INFO][4161] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.194/26] handle="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.739 [INFO][4161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:35.773922 containerd[1459]: 2025-01-13 21:26:35.739 [INFO][4161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.194/26] IPv6=[] ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" HandleID="k8s-pod-network.efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.742 [INFO][4150] cni-plugin/k8s.go 386: Populated endpoint ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0", GenerateName:"calico-kube-controllers-6dfb8cc8cd-", Namespace:"calico-system", SelfLink:"", UID:"fe1c0d21-cbd2-493c-aac5-49c46482135d", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfb8cc8cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6dfb8cc8cd-swbqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1bceffb2ec1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.742 [INFO][4150] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.194/32] ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.742 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bceffb2ec1 ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.746 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.748 [INFO][4150] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0", GenerateName:"calico-kube-controllers-6dfb8cc8cd-", Namespace:"calico-system", SelfLink:"", UID:"fe1c0d21-cbd2-493c-aac5-49c46482135d", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfb8cc8cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf", Pod:"calico-kube-controllers-6dfb8cc8cd-swbqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1bceffb2ec1", MAC:"16:ae:62:b9:de:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:35.776379 containerd[1459]: 2025-01-13 21:26:35.769 [INFO][4150] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf" Namespace="calico-system" Pod="calico-kube-controllers-6dfb8cc8cd-swbqm" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0" Jan 13 21:26:35.834397 containerd[1459]: time="2025-01-13T21:26:35.833670509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:35.834397 containerd[1459]: time="2025-01-13T21:26:35.833770073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:35.835423 containerd[1459]: time="2025-01-13T21:26:35.834724917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:35.835423 containerd[1459]: time="2025-01-13T21:26:35.834914055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:35.888952 systemd[1]: Started cri-containerd-efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf.scope - libcontainer container efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf. Jan 13 21:26:35.963213 containerd[1459]: time="2025-01-13T21:26:35.963154374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfb8cc8cd-swbqm,Uid:fe1c0d21-cbd2-493c-aac5-49c46482135d,Namespace:calico-system,Attempt:1,} returns sandbox id \"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf\"" Jan 13 21:26:36.473362 containerd[1459]: time="2025-01-13T21:26:36.473281647Z" level=info msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\"" Jan 13 21:26:36.680140 systemd-networkd[1374]: cali704d7ebc52d: Gained IPv6LL Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.575 [INFO][4237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.576 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" iface="eth0" netns="/var/run/netns/cni-1702adc6-5cb8-a1b0-d206-6babd186aed5" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.576 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" iface="eth0" netns="/var/run/netns/cni-1702adc6-5cb8-a1b0-d206-6babd186aed5" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.576 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" iface="eth0" netns="/var/run/netns/cni-1702adc6-5cb8-a1b0-d206-6babd186aed5" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.577 [INFO][4237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.577 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.698 [INFO][4243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.699 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.699 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.718 [WARNING][4243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.718 [INFO][4243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.722 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:36.734879 containerd[1459]: 2025-01-13 21:26:36.725 [INFO][4237] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Jan 13 21:26:36.746981 containerd[1459]: time="2025-01-13T21:26:36.742408465Z" level=info msg="TearDown network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" successfully" Jan 13 21:26:36.746981 containerd[1459]: time="2025-01-13T21:26:36.742510648Z" level=info msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" returns successfully" Jan 13 21:26:36.746981 containerd[1459]: time="2025-01-13T21:26:36.743982098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cdjn,Uid:eea4dbd0-48b0-456a-9730-2e0b0b5023a9,Namespace:kube-system,Attempt:1,}" Jan 13 21:26:36.742770 systemd[1]: run-netns-cni\x2d1702adc6\x2d5cb8\x2da1b0\x2dd206\x2d6babd186aed5.mount: Deactivated successfully. Jan 13 21:26:36.806894 systemd-networkd[1374]: cali1bceffb2ec1: Gained IPv6LL Jan 13 21:26:36.935260 systemd[1]: Started sshd@9-10.128.0.101:22-147.75.109.163:52396.service - OpenSSH per-connection server daemon (147.75.109.163:52396). 
Jan 13 21:26:37.228186 systemd-networkd[1374]: calif4fe998e4da: Link UP Jan 13 21:26:37.230780 systemd-networkd[1374]: calif4fe998e4da: Gained carrier Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.059 [INFO][4251] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0 coredns-7db6d8ff4d- kube-system eea4dbd0-48b0-456a-9730-2e0b0b5023a9 801 0 2025-01-13 21:26:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal coredns-7db6d8ff4d-6cdjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4fe998e4da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.059 [INFO][4251] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.134 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" HandleID="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.155 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" HandleID="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ba40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-6cdjn", "timestamp":"2025-01-13 21:26:37.134607475 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.155 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.155 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.155 [INFO][4268] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.158 [INFO][4268] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.168 [INFO][4268] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.174 [INFO][4268] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.178 [INFO][4268] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.183 [INFO][4268] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.183 [INFO][4268] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.186 [INFO][4268] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.194 [INFO][4268] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.218 [INFO][4268] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.195/26] block=192.168.91.192/26 handle="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.218 [INFO][4268] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.195/26] handle="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.218 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:37.269993 containerd[1459]: 2025-01-13 21:26:37.218 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.195/26] IPv6=[] ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" HandleID="k8s-pod-network.9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.222 [INFO][4251] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eea4dbd0-48b0-456a-9730-2e0b0b5023a9", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-6cdjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe998e4da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.222 [INFO][4251] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.195/32] ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.222 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4fe998e4da ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.233 [INFO][4251] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.234 [INFO][4251] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eea4dbd0-48b0-456a-9730-2e0b0b5023a9", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a", Pod:"coredns-7db6d8ff4d-6cdjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe998e4da", MAC:"3e:31:98:aa:1c:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:37.272243 containerd[1459]: 2025-01-13 21:26:37.260 [INFO][4251] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6cdjn" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0" Jan 13 21:26:37.308086 sshd[4260]: Accepted publickey for core from 147.75.109.163 port 52396 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:26:37.309632 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:37.328221 systemd-logind[1439]: New session 10 of user core. Jan 13 21:26:37.339507 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 21:26:37.370528 containerd[1459]: time="2025-01-13T21:26:37.369979897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:37.370528 containerd[1459]: time="2025-01-13T21:26:37.370138803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:37.370528 containerd[1459]: time="2025-01-13T21:26:37.370172009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:37.370528 containerd[1459]: time="2025-01-13T21:26:37.370340190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:37.428992 systemd[1]: Started cri-containerd-9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a.scope - libcontainer container 9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a. Jan 13 21:26:37.475114 containerd[1459]: time="2025-01-13T21:26:37.475044149Z" level=info msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" Jan 13 21:26:37.476423 containerd[1459]: time="2025-01-13T21:26:37.475938597Z" level=info msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" Jan 13 21:26:37.481759 containerd[1459]: time="2025-01-13T21:26:37.481105191Z" level=info msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" Jan 13 21:26:37.674140 containerd[1459]: time="2025-01-13T21:26:37.673080391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cdjn,Uid:eea4dbd0-48b0-456a-9730-2e0b0b5023a9,Namespace:kube-system,Attempt:1,} returns sandbox id \"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a\"" Jan 13 21:26:37.704928 containerd[1459]: time="2025-01-13T21:26:37.704867285Z" level=info msg="CreateContainer within sandbox \"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:26:37.760839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711615432.mount: Deactivated successfully. Jan 13 21:26:37.765511 containerd[1459]: time="2025-01-13T21:26:37.763461867Z" level=info msg="CreateContainer within sandbox \"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d\"" Jan 13 21:26:37.765331 sshd[4260]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:37.769826 containerd[1459]: time="2025-01-13T21:26:37.768899435Z" level=info msg="StartContainer for \"f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d\"" Jan 13 21:26:37.780286 systemd[1]: sshd@9-10.128.0.101:22-147.75.109.163:52396.service: Deactivated successfully. Jan 13 21:26:37.786621 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:26:37.795010 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:26:37.797386 systemd-logind[1439]: Removed session 10. Jan 13 21:26:37.870467 systemd[1]: run-containerd-runc-k8s.io-f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d-runc.kQ2CT1.mount: Deactivated successfully. 
Jan 13 21:26:37.886961 systemd[1]: Started cri-containerd-f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d.scope - libcontainer container f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d. Jan 13 21:26:37.972074 containerd[1459]: time="2025-01-13T21:26:37.970838130Z" level=info msg="StartContainer for \"f82fa8931e177ba0b382ae5382d450f11d4190bc97dbc415cad4b149d0f3e72d\" returns successfully" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.952 [INFO][4372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.954 [INFO][4372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" iface="eth0" netns="/var/run/netns/cni-4b9b31ed-5250-9344-27f5-9d5e58c79ed3" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.957 [INFO][4372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" iface="eth0" netns="/var/run/netns/cni-4b9b31ed-5250-9344-27f5-9d5e58c79ed3" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.958 [INFO][4372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" iface="eth0" netns="/var/run/netns/cni-4b9b31ed-5250-9344-27f5-9d5e58c79ed3" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.958 [INFO][4372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:37.958 [INFO][4372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.103 [INFO][4439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.103 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.103 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.138 [WARNING][4439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.138 [INFO][4439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.144 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:38.155735 containerd[1459]: 2025-01-13 21:26:38.150 [INFO][4372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:38.160086 containerd[1459]: time="2025-01-13T21:26:38.158809602Z" level=info msg="TearDown network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" successfully" Jan 13 21:26:38.160086 containerd[1459]: time="2025-01-13T21:26:38.158858158Z" level=info msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" returns successfully" Jan 13 21:26:38.160086 containerd[1459]: time="2025-01-13T21:26:38.160059084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-dtk9h,Uid:9bf7ec82-1e0a-4102-bad0-cba9e9a839cf,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.958 [INFO][4370] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.958 [INFO][4370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" iface="eth0" netns="/var/run/netns/cni-6a76561b-8ee1-ce71-f4b3-4ebd828387e6" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.959 [INFO][4370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" iface="eth0" netns="/var/run/netns/cni-6a76561b-8ee1-ce71-f4b3-4ebd828387e6" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.969 [INFO][4370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" iface="eth0" netns="/var/run/netns/cni-6a76561b-8ee1-ce71-f4b3-4ebd828387e6" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.970 [INFO][4370] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:37.971 [INFO][4370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.116 [INFO][4444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.117 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.144 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.168 [WARNING][4444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.168 [INFO][4444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.171 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:38.183576 containerd[1459]: 2025-01-13 21:26:38.177 [INFO][4370] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:38.185362 containerd[1459]: time="2025-01-13T21:26:38.183719809Z" level=info msg="TearDown network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" successfully" Jan 13 21:26:38.185362 containerd[1459]: time="2025-01-13T21:26:38.183756492Z" level=info msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" returns successfully" Jan 13 21:26:38.185362 containerd[1459]: time="2025-01-13T21:26:38.184862089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brgw4,Uid:c6337092-d429-49f9-9c09-de05379de9a5,Namespace:calico-system,Attempt:1,}" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.024 [INFO][4371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.024 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" iface="eth0" netns="/var/run/netns/cni-c254f145-7066-5a1b-c8ec-ba3232fe1814" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.025 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" iface="eth0" netns="/var/run/netns/cni-c254f145-7066-5a1b-c8ec-ba3232fe1814" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.026 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" iface="eth0" netns="/var/run/netns/cni-c254f145-7066-5a1b-c8ec-ba3232fe1814" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.026 [INFO][4371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.026 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.161 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.162 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.172 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.193 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.193 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.198 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:38.217617 containerd[1459]: 2025-01-13 21:26:38.202 [INFO][4371] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:38.218398 containerd[1459]: time="2025-01-13T21:26:38.217623146Z" level=info msg="TearDown network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" successfully" Jan 13 21:26:38.218398 containerd[1459]: time="2025-01-13T21:26:38.217755538Z" level=info msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" returns successfully" Jan 13 21:26:38.220194 containerd[1459]: time="2025-01-13T21:26:38.219558301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfh9x,Uid:890d7497-a5c2-420a-b9d1-ef249860cf9d,Namespace:kube-system,Attempt:1,}" Jan 13 21:26:38.526037 systemd-networkd[1374]: caliad389243c82: Link UP Jan 13 21:26:38.526363 systemd-networkd[1374]: caliad389243c82: Gained carrier Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.279 [INFO][4465] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0 calico-apiserver-6556db8f5f- calico-apiserver 9bf7ec82-1e0a-4102-bad0-cba9e9a839cf 847 0 2025-01-13 21:26:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6556db8f5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal calico-apiserver-6556db8f5f-dtk9h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad389243c82 [] []}} ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.280 [INFO][4465] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.415 [INFO][4498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" HandleID="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.441 [INFO][4498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" HandleID="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"calico-apiserver-6556db8f5f-dtk9h", 
"timestamp":"2025-01-13 21:26:38.415971815 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.441 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.442 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.442 [INFO][4498] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.445 [INFO][4498] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.453 [INFO][4498] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.464 [INFO][4498] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.468 [INFO][4498] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.475 [INFO][4498] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.475 [INFO][4498] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.480 [INFO][4498] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55 Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.492 [INFO][4498] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.509 [INFO][4498] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.196/26] block=192.168.91.192/26 handle="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.509 [INFO][4498] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.196/26] handle="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.509 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:38.583544 containerd[1459]: 2025-01-13 21:26:38.509 [INFO][4498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.196/26] IPv6=[] ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" HandleID="k8s-pod-network.4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.519 [INFO][4465] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6556db8f5f-dtk9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad389243c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.519 [INFO][4465] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.196/32] ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.520 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad389243c82 ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.534 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" 
Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.538 [INFO][4465] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55", Pod:"calico-apiserver-6556db8f5f-dtk9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad389243c82", MAC:"5a:25:82:0e:93:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:38.586510 containerd[1459]: 2025-01-13 21:26:38.576 [INFO][4465] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55" Namespace="calico-apiserver" Pod="calico-apiserver-6556db8f5f-dtk9h" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:38.663124 systemd-networkd[1374]: cali580a536ed16: Link UP Jan 13 21:26:38.664740 systemd-networkd[1374]: cali580a536ed16: Gained carrier Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.383 [INFO][4486] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0 coredns-7db6d8ff4d- kube-system 890d7497-a5c2-420a-b9d1-ef249860cf9d 849 0 2025-01-13 21:26:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal coredns-7db6d8ff4d-gfh9x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali580a536ed16 
[{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.383 [INFO][4486] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.507 [INFO][4506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" HandleID="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.561 [INFO][4506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" HandleID="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011aa70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-gfh9x", "timestamp":"2025-01-13 21:26:38.507259142 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.562 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.564 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.564 [INFO][4506] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.566 [INFO][4506] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.577 [INFO][4506] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.587 [INFO][4506] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.591 [INFO][4506] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.597 [INFO][4506] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.598 [INFO][4506] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.600 [INFO][4506] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.612 [INFO][4506] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.627 [INFO][4506] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.197/26] block=192.168.91.192/26 handle="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.629 [INFO][4506] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.197/26] handle="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.629 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:38.752757 containerd[1459]: 2025-01-13 21:26:38.629 [INFO][4506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.197/26] IPv6=[] ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" HandleID="k8s-pod-network.ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.637 [INFO][4486] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"890d7497-a5c2-420a-b9d1-ef249860cf9d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-gfh9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580a536ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.637 [INFO][4486] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.197/32] ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.638 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580a536ed16 ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.670 [INFO][4486] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.673 [INFO][4486] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"890d7497-a5c2-420a-b9d1-ef249860cf9d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f", Pod:"coredns-7db6d8ff4d-gfh9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580a536ed16", MAC:"4a:0e:a1:8a:aa:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:38.761268 containerd[1459]: 2025-01-13 21:26:38.735 [INFO][4486] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gfh9x" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:38.769756 systemd[1]: run-netns-cni\x2d6a76561b\x2d8ee1\x2dce71\x2df4b3\x2d4ebd828387e6.mount: Deactivated successfully. Jan 13 21:26:38.771977 systemd[1]: run-netns-cni\x2dc254f145\x2d7066\x2d5a1b\x2dc8ec\x2dba3232fe1814.mount: Deactivated successfully. Jan 13 21:26:38.772112 systemd[1]: run-netns-cni\x2d4b9b31ed\x2d5250\x2d9344\x2d27f5\x2d9d5e58c79ed3.mount: Deactivated successfully. 
Jan 13 21:26:38.792745 systemd-networkd[1374]: calif4fe998e4da: Gained IPv6LL Jan 13 21:26:38.816077 containerd[1459]: time="2025-01-13T21:26:38.814912839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:38.816077 containerd[1459]: time="2025-01-13T21:26:38.815042334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:38.816077 containerd[1459]: time="2025-01-13T21:26:38.815064833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.816077 containerd[1459]: time="2025-01-13T21:26:38.815399405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.925006 systemd[1]: Started cri-containerd-4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55.scope - libcontainer container 4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55. Jan 13 21:26:38.957510 systemd-networkd[1374]: calid1c7d4c55e3: Link UP Jan 13 21:26:38.960085 systemd[1]: run-containerd-runc-k8s.io-4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55-runc.KxVeDD.mount: Deactivated successfully. Jan 13 21:26:38.966021 systemd-networkd[1374]: calid1c7d4c55e3: Gained carrier Jan 13 21:26:39.002216 containerd[1459]: time="2025-01-13T21:26:39.000858908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:39.002216 containerd[1459]: time="2025-01-13T21:26:39.000985505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:39.002216 containerd[1459]: time="2025-01-13T21:26:39.001016283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:39.002216 containerd[1459]: time="2025-01-13T21:26:39.001169631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.403 [INFO][4476] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0 csi-node-driver- calico-system c6337092-d429-49f9-9c09-de05379de9a5 846 0 2025-01-13 21:26:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal csi-node-driver-brgw4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid1c7d4c55e3 [] []}} ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.403 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.534 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" HandleID="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.570 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" HandleID="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039a680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", "pod":"csi-node-driver-brgw4", "timestamp":"2025-01-13 21:26:38.534386975 +0000 UTC"}, Hostname:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.570 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.634 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.634 [INFO][4510] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal' Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.677 [INFO][4510] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.725 [INFO][4510] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.755 [INFO][4510] ipam/ipam.go 489: Trying affinity for 192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.773 [INFO][4510] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.817 [INFO][4510] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.824 [INFO][4510] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.840 [INFO][4510] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.863 [INFO][4510] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.901 [INFO][4510] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.198/26] block=192.168.91.192/26 handle="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.901 [INFO][4510] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.198/26] handle="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" host="ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal" Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.902 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:26:39.052183 containerd[1459]: 2025-01-13 21:26:38.902 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.198/26] IPv6=[] ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" HandleID="k8s-pod-network.a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:38.907 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6337092-d429-49f9-9c09-de05379de9a5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-brgw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c7d4c55e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:38.910 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.198/32] ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:38.910 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1c7d4c55e3 ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:38.963 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" 
WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:38.980 [INFO][4476] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6337092-d429-49f9-9c09-de05379de9a5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa", Pod:"csi-node-driver-brgw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c7d4c55e3", MAC:"8a:ef:12:fa:c0:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:39.058014 containerd[1459]: 2025-01-13 21:26:39.032 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa" Namespace="calico-system" Pod="csi-node-driver-brgw4" WorkloadEndpoint="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:39.100670 kubelet[2666]: I0113 21:26:39.096685 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6cdjn" podStartSLOduration=38.096617852 podStartE2EDuration="38.096617852s" podCreationTimestamp="2025-01-13 21:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:39.043492819 +0000 UTC m=+52.751347940" watchObservedRunningTime="2025-01-13 21:26:39.096617852 +0000 UTC m=+52.804472976" Jan 13 21:26:39.172991 systemd[1]: Started cri-containerd-ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f.scope - libcontainer container ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f. Jan 13 21:26:39.234177 containerd[1459]: time="2025-01-13T21:26:39.232615548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:39.234177 containerd[1459]: time="2025-01-13T21:26:39.232724901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:39.234177 containerd[1459]: time="2025-01-13T21:26:39.232755727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:39.234177 containerd[1459]: time="2025-01-13T21:26:39.232910629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:39.302168 systemd[1]: Started cri-containerd-a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa.scope - libcontainer container a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa. Jan 13 21:26:39.349983 containerd[1459]: time="2025-01-13T21:26:39.348384482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfh9x,Uid:890d7497-a5c2-420a-b9d1-ef249860cf9d,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f\"" Jan 13 21:26:39.368555 containerd[1459]: time="2025-01-13T21:26:39.368473540Z" level=info msg="CreateContainer within sandbox \"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:26:39.470383 containerd[1459]: time="2025-01-13T21:26:39.470314938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6556db8f5f-dtk9h,Uid:9bf7ec82-1e0a-4102-bad0-cba9e9a839cf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55\"" Jan 13 21:26:39.482122 containerd[1459]: time="2025-01-13T21:26:39.481956633Z" level=info msg="CreateContainer within sandbox \"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcd4d68377275a6ad42e3172a47efc99ff23c810836d2862fe11a0538b3cbb4a\"" Jan 13 21:26:39.484527 containerd[1459]: time="2025-01-13T21:26:39.484424945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brgw4,Uid:c6337092-d429-49f9-9c09-de05379de9a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa\"" Jan 13 21:26:39.487445 containerd[1459]: time="2025-01-13T21:26:39.485863259Z" level=info msg="StartContainer for \"dcd4d68377275a6ad42e3172a47efc99ff23c810836d2862fe11a0538b3cbb4a\"" Jan 13 21:26:39.603081 systemd[1]: Started cri-containerd-dcd4d68377275a6ad42e3172a47efc99ff23c810836d2862fe11a0538b3cbb4a.scope - libcontainer container dcd4d68377275a6ad42e3172a47efc99ff23c810836d2862fe11a0538b3cbb4a. 
Jan 13 21:26:39.681623 containerd[1459]: time="2025-01-13T21:26:39.681549023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:39.684266 containerd[1459]: time="2025-01-13T21:26:39.683985224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:26:39.687531 containerd[1459]: time="2025-01-13T21:26:39.686512017Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:39.695630 containerd[1459]: time="2025-01-13T21:26:39.695355795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:39.699396 containerd[1459]: time="2025-01-13T21:26:39.695840759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.718887947s" Jan 13 21:26:39.699770 containerd[1459]: time="2025-01-13T21:26:39.699406780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:26:39.704118 containerd[1459]: time="2025-01-13T21:26:39.704064105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:26:39.705263 containerd[1459]: time="2025-01-13T21:26:39.705095028Z" level=info msg="CreateContainer within sandbox \"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:26:39.762743 containerd[1459]: time="2025-01-13T21:26:39.760583058Z" level=info msg="StartContainer for \"dcd4d68377275a6ad42e3172a47efc99ff23c810836d2862fe11a0538b3cbb4a\" returns successfully" Jan 13 21:26:39.765409 containerd[1459]: time="2025-01-13T21:26:39.765341398Z" level=info msg="CreateContainer within sandbox \"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94\"" Jan 13 21:26:39.768771 containerd[1459]: time="2025-01-13T21:26:39.766213649Z" level=info msg="StartContainer for \"909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94\"" Jan 13 21:26:39.849430 systemd[1]: run-containerd-runc-k8s.io-909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94-runc.MK4vcH.mount: Deactivated successfully. Jan 13 21:26:39.862287 systemd[1]: Started cri-containerd-909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94.scope - libcontainer container 909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94. 
Jan 13 21:26:39.944948 systemd-networkd[1374]: caliad389243c82: Gained IPv6LL Jan 13 21:26:39.954218 containerd[1459]: time="2025-01-13T21:26:39.954096425Z" level=info msg="StartContainer for \"909f915cb8b247f1e4caed70c913114819caf676e963baa68cb8b0bf9dd05a94\" returns successfully" Jan 13 21:26:40.018141 kubelet[2666]: I0113 21:26:40.018019 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6556db8f5f-j25nm" podStartSLOduration=24.29111962 podStartE2EDuration="29.017987658s" podCreationTimestamp="2025-01-13 21:26:11 +0000 UTC" firstStartedPulling="2025-01-13 21:26:34.973812912 +0000 UTC m=+48.681668024" lastFinishedPulling="2025-01-13 21:26:39.700680834 +0000 UTC m=+53.408536062" observedRunningTime="2025-01-13 21:26:40.017072795 +0000 UTC m=+53.724927919" watchObservedRunningTime="2025-01-13 21:26:40.017987658 +0000 UTC m=+53.725842762" Jan 13 21:26:40.051533 kubelet[2666]: I0113 21:26:40.051338 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gfh9x" podStartSLOduration=39.05130581 podStartE2EDuration="39.05130581s" podCreationTimestamp="2025-01-13 21:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:40.051147665 +0000 UTC m=+53.759002806" watchObservedRunningTime="2025-01-13 21:26:40.05130581 +0000 UTC m=+53.759160931" Jan 13 21:26:40.583189 systemd-networkd[1374]: cali580a536ed16: Gained IPv6LL Jan 13 21:26:40.649627 systemd-networkd[1374]: calid1c7d4c55e3: Gained IPv6LL Jan 13 21:26:41.005083 kubelet[2666]: I0113 21:26:41.005007 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:26:41.890165 containerd[1459]: time="2025-01-13T21:26:41.889995722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:41.891871 containerd[1459]: time="2025-01-13T21:26:41.891739290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:26:41.894578 containerd[1459]: time="2025-01-13T21:26:41.894530384Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:41.899638 containerd[1459]: time="2025-01-13T21:26:41.898288591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:41.899638 containerd[1459]: time="2025-01-13T21:26:41.899427838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.1952983s" Jan 13 21:26:41.899638 containerd[1459]: time="2025-01-13T21:26:41.899479473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:26:41.902500 containerd[1459]: 
time="2025-01-13T21:26:41.902422739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:26:41.933231 containerd[1459]: time="2025-01-13T21:26:41.933139978Z" level=info msg="CreateContainer within sandbox \"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:26:41.956341 containerd[1459]: time="2025-01-13T21:26:41.956248867Z" level=info msg="CreateContainer within sandbox \"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8\"" Jan 13 21:26:41.957625 containerd[1459]: time="2025-01-13T21:26:41.957579119Z" level=info msg="StartContainer for \"c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8\"" Jan 13 21:26:42.031049 systemd[1]: Started cri-containerd-c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8.scope - libcontainer container c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8. Jan 13 21:26:42.111083 containerd[1459]: time="2025-01-13T21:26:42.110683291Z" level=info msg="StartContainer for \"c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8\" returns successfully" Jan 13 21:26:42.116361 containerd[1459]: time="2025-01-13T21:26:42.116195750Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:42.117841 containerd[1459]: time="2025-01-13T21:26:42.117682269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:26:42.127932 containerd[1459]: time="2025-01-13T21:26:42.127771033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 225.000181ms" Jan 13 21:26:42.127932 containerd[1459]: time="2025-01-13T21:26:42.127864160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:26:42.133088 containerd[1459]: time="2025-01-13T21:26:42.133028138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:26:42.135010 containerd[1459]: time="2025-01-13T21:26:42.134824354Z" level=info msg="CreateContainer within sandbox \"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:26:42.169689 containerd[1459]: time="2025-01-13T21:26:42.169450914Z" level=info msg="CreateContainer within sandbox \"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9eaca44ef752f978aa2b558fe83fbe3f7ef3b496bb2a2c3df9caa6659604e299\"" Jan 13 21:26:42.174306 containerd[1459]: time="2025-01-13T21:26:42.174233818Z" level=info msg="StartContainer for \"9eaca44ef752f978aa2b558fe83fbe3f7ef3b496bb2a2c3df9caa6659604e299\"" Jan 13 21:26:42.247132 systemd[1]: Started cri-containerd-9eaca44ef752f978aa2b558fe83fbe3f7ef3b496bb2a2c3df9caa6659604e299.scope - libcontainer container 
9eaca44ef752f978aa2b558fe83fbe3f7ef3b496bb2a2c3df9caa6659604e299. Jan 13 21:26:42.363141 containerd[1459]: time="2025-01-13T21:26:42.362994530Z" level=info msg="StartContainer for \"9eaca44ef752f978aa2b558fe83fbe3f7ef3b496bb2a2c3df9caa6659604e299\" returns successfully" Jan 13 21:26:42.803481 ntpd[1426]: Listen normally on 7 vxlan.calico 192.168.91.192:123 Jan 13 21:26:42.803626 ntpd[1426]: Listen normally on 8 vxlan.calico [fe80::648b:ceff:fef5:c82%4]:123 Jan 13 21:26:42.803756 ntpd[1426]: Listen normally on 9 cali704d7ebc52d [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:26:42.803819 ntpd[1426]: Listen normally on 10 cali1bceffb2ec1 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:26:42.803959 ntpd[1426]: Listen normally on 11 calif4fe998e4da [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:26:42.804121 ntpd[1426]: Listen normally on 12 caliad389243c82 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 21:26:42.804196 ntpd[1426]: Listen normally on 13 cali580a536ed16 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:26:42.804268 ntpd[1426]: Listen normally on 14 calid1c7d4c55e3 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:26:42.826026 systemd[1]: Started sshd@10-10.128.0.101:22-147.75.109.163:40774.service - OpenSSH per-connection server daemon (147.75.109.163:40774). Jan 13 21:26:42.931608 systemd[1]: run-containerd-runc-k8s.io-c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8-runc.JYb7JM.mount: Deactivated successfully.
Jan 13 21:26:43.101372 kubelet[2666]: I0113 21:26:43.100022 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6556db8f5f-dtk9h" podStartSLOduration=29.445879963 podStartE2EDuration="32.09999023s" podCreationTimestamp="2025-01-13 21:26:11 +0000 UTC" firstStartedPulling="2025-01-13 21:26:39.475979353 +0000 UTC m=+53.183834464" lastFinishedPulling="2025-01-13 21:26:42.130089621 +0000 UTC m=+55.837944731" observedRunningTime="2025-01-13 21:26:43.062230761 +0000 UTC m=+56.770085883" watchObservedRunningTime="2025-01-13 21:26:43.09999023 +0000 UTC m=+56.807845351" Jan 13 21:26:43.108113 kubelet[2666]: I0113 21:26:43.106628 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dfb8cc8cd-swbqm" podStartSLOduration=25.17023229 podStartE2EDuration="31.10653673s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:35.965314878 +0000 UTC m=+49.673169986" lastFinishedPulling="2025-01-13 21:26:41.901619314 +0000 UTC m=+55.609474426" observedRunningTime="2025-01-13 21:26:43.105508072 +0000 UTC m=+56.813363183" watchObservedRunningTime="2025-01-13 21:26:43.10653673 +0000 UTC m=+56.814391854" Jan 13 21:26:43.145063 systemd[1]: run-containerd-runc-k8s.io-c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8-runc.gJfp3A.mount: Deactivated successfully. Jan 13 21:26:43.168165 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 40774 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:26:43.174860 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:43.188915 systemd-logind[1439]: New session 11 of user core. Jan 13 21:26:43.191938 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:26:43.598220 sshd[4859]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:43.606024 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:26:43.607563 systemd[1]: sshd@10-10.128.0.101:22-147.75.109.163:40774.service: Deactivated successfully. Jan 13 21:26:43.608135 containerd[1459]: time="2025-01-13T21:26:43.608086736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:43.612846 containerd[1459]: time="2025-01-13T21:26:43.612134685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:26:43.614212 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:26:43.616418 systemd-logind[1439]: Removed session 11. 
Jan 13 21:26:43.617138 containerd[1459]: time="2025-01-13T21:26:43.616869863Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:43.623775 containerd[1459]: time="2025-01-13T21:26:43.622924004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:43.624947 containerd[1459]: time="2025-01-13T21:26:43.624903188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.491805641s" Jan 13 21:26:43.625764 containerd[1459]: time="2025-01-13T21:26:43.624953415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:26:43.629721 containerd[1459]: time="2025-01-13T21:26:43.629021305Z" level=info msg="CreateContainer within sandbox \"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:26:43.657824 containerd[1459]: time="2025-01-13T21:26:43.657069675Z" level=info msg="CreateContainer within sandbox \"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c9b8b15b06b06d45a45f55aed48f18256d3a2fa0db0fa478fe146fc9740b9c66\"" Jan 13 21:26:43.660717 containerd[1459]: time="2025-01-13T21:26:43.660181629Z" level=info msg="StartContainer for \"c9b8b15b06b06d45a45f55aed48f18256d3a2fa0db0fa478fe146fc9740b9c66\"" Jan 13 21:26:43.719940 systemd[1]: Started cri-containerd-c9b8b15b06b06d45a45f55aed48f18256d3a2fa0db0fa478fe146fc9740b9c66.scope - libcontainer container c9b8b15b06b06d45a45f55aed48f18256d3a2fa0db0fa478fe146fc9740b9c66. 
Jan 13 21:26:43.783331 containerd[1459]: time="2025-01-13T21:26:43.782052336Z" level=info msg="StartContainer for \"c9b8b15b06b06d45a45f55aed48f18256d3a2fa0db0fa478fe146fc9740b9c66\" returns successfully" Jan 13 21:26:43.785445 containerd[1459]: time="2025-01-13T21:26:43.785341858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:26:44.051073 kubelet[2666]: I0113 21:26:44.051024 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:26:45.096326 containerd[1459]: time="2025-01-13T21:26:45.096248090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:45.097749 containerd[1459]: time="2025-01-13T21:26:45.097654695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:26:45.099471 containerd[1459]: time="2025-01-13T21:26:45.099371576Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:45.104114 containerd[1459]: time="2025-01-13T21:26:45.104038655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:45.105717 containerd[1459]: time="2025-01-13T21:26:45.105438993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.319950692s" Jan 13 21:26:45.105717 containerd[1459]: time="2025-01-13T21:26:45.105515222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:26:45.109586 containerd[1459]: time="2025-01-13T21:26:45.109281964Z" level=info msg="CreateContainer within sandbox \"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:26:45.138620 containerd[1459]: time="2025-01-13T21:26:45.138550293Z" level=info msg="CreateContainer within sandbox \"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b02de6b936cf0e49a263f7267168ce540aba0902fae055684821a25eac9b5f5\"" Jan 13 21:26:45.139662 containerd[1459]: time="2025-01-13T21:26:45.139616817Z" level=info msg="StartContainer for \"1b02de6b936cf0e49a263f7267168ce540aba0902fae055684821a25eac9b5f5\"" Jan 13 21:26:45.140909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562328404.mount: Deactivated successfully. Jan 13 21:26:45.222966 systemd[1]: Started cri-containerd-1b02de6b936cf0e49a263f7267168ce540aba0902fae055684821a25eac9b5f5.scope - libcontainer container 1b02de6b936cf0e49a263f7267168ce540aba0902fae055684821a25eac9b5f5. 
Jan 13 21:26:45.269410 containerd[1459]: time="2025-01-13T21:26:45.268972238Z" level=info msg="StartContainer for \"1b02de6b936cf0e49a263f7267168ce540aba0902fae055684821a25eac9b5f5\" returns successfully" Jan 13 21:26:45.643939 kubelet[2666]: I0113 21:26:45.643617 2666 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:26:45.643939 kubelet[2666]: I0113 21:26:45.643668 2666 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:26:46.487813 containerd[1459]: time="2025-01-13T21:26:46.487409748Z" level=info msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.567 [WARNING][4990] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6337092-d429-49f9-9c09-de05379de9a5", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa", Pod:"csi-node-driver-brgw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c7d4c55e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.567 [INFO][4990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.568 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" iface="eth0" netns="" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.568 [INFO][4990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.568 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.596 [INFO][4996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.596 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.596 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.605 [WARNING][4996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.605 [INFO][4996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.607 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:46.611681 containerd[1459]: 2025-01-13 21:26:46.608 [INFO][4990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.611681 containerd[1459]: time="2025-01-13T21:26:46.611495085Z" level=info msg="TearDown network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" successfully" Jan 13 21:26:46.611681 containerd[1459]: time="2025-01-13T21:26:46.611531325Z" level=info msg="StopPodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" returns successfully" Jan 13 21:26:46.614094 containerd[1459]: time="2025-01-13T21:26:46.613071790Z" level=info msg="RemovePodSandbox for \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" Jan 13 21:26:46.614094 containerd[1459]: time="2025-01-13T21:26:46.613116350Z" level=info msg="Forcibly stopping sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\"" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.669 [WARNING][5014] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6337092-d429-49f9-9c09-de05379de9a5", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"a55815e5d712bc49e9412e24d78ffebc7894c7ecf97b4272c2620e6daf5286aa", Pod:"csi-node-driver-brgw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c7d4c55e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.669 [INFO][5014] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.669 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" iface="eth0" netns="" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.669 [INFO][5014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.669 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.696 [INFO][5020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.696 [INFO][5020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.696 [INFO][5020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.703 [WARNING][5020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.703 [INFO][5020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" HandleID="k8s-pod-network.39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-csi--node--driver--brgw4-eth0" Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.705 [INFO][5020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:46.709315 containerd[1459]: 2025-01-13 21:26:46.707 [INFO][5014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a" Jan 13 21:26:46.709315 containerd[1459]: time="2025-01-13T21:26:46.709307576Z" level=info msg="TearDown network for sandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" successfully" Jan 13 21:26:46.714994 containerd[1459]: time="2025-01-13T21:26:46.714852790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:26:46.714994 containerd[1459]: time="2025-01-13T21:26:46.714959705Z" level=info msg="RemovePodSandbox \"39a76aba6f5f2a683e362a58e900912c8f09ef23c88d809af01d61e2742b959a\" returns successfully" Jan 13 21:26:46.715987 containerd[1459]: time="2025-01-13T21:26:46.715937914Z" level=info msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.767 [WARNING][5038] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55", Pod:"calico-apiserver-6556db8f5f-dtk9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad389243c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.768 [INFO][5038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.768 [INFO][5038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" iface="eth0" netns="" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.768 [INFO][5038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.768 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.799 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.799 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.799 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.807 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.807 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.809 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:46.812091 containerd[1459]: 2025-01-13 21:26:46.810 [INFO][5038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.813048 containerd[1459]: time="2025-01-13T21:26:46.812172609Z" level=info msg="TearDown network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" successfully" Jan 13 21:26:46.813048 containerd[1459]: time="2025-01-13T21:26:46.812217688Z" level=info msg="StopPodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" returns successfully" Jan 13 21:26:46.813230 containerd[1459]: time="2025-01-13T21:26:46.813128181Z" level=info msg="RemovePodSandbox for \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" Jan 13 21:26:46.813230 containerd[1459]: time="2025-01-13T21:26:46.813174253Z" level=info msg="Forcibly stopping sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\"" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.869 [WARNING][5063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf7ec82-1e0a-4102-bad0-cba9e9a839cf", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"4f48abc1859b2f5d565807717c532cffb331957a0b5106953175ceed48be9d55", Pod:"calico-apiserver-6556db8f5f-dtk9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad389243c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.870 [INFO][5063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.870 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" iface="eth0" netns="" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.870 [INFO][5063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.870 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.941 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.941 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.941 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.957 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.957 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" HandleID="k8s-pod-network.2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--dtk9h-eth0" Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.968 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:46.985620 containerd[1459]: 2025-01-13 21:26:46.973 [INFO][5063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9" Jan 13 21:26:46.989636 containerd[1459]: time="2025-01-13T21:26:46.985825217Z" level=info msg="TearDown network for sandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" successfully" Jan 13 21:26:47.004188 containerd[1459]: time="2025-01-13T21:26:47.004119482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:26:47.007902 containerd[1459]: time="2025-01-13T21:26:47.007839268Z" level=info msg="RemovePodSandbox \"2b62706c86527a5e19bef521751bf4ebe9eba9893268bdb384d1fbf981198ca9\" returns successfully" Jan 13 21:26:47.009520 containerd[1459]: time="2025-01-13T21:26:47.009479212Z" level=info msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.156 [WARNING][5091] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"890d7497-a5c2-420a-b9d1-ef249860cf9d", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f", Pod:"coredns-7db6d8ff4d-gfh9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580a536ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.157 [INFO][5091] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.157 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" iface="eth0" netns="" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.157 [INFO][5091] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.157 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.188 [INFO][5098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.188 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.188 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.198 [WARNING][5098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.199 [INFO][5098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.201 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:47.205591 containerd[1459]: 2025-01-13 21:26:47.203 [INFO][5091] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.207177 containerd[1459]: time="2025-01-13T21:26:47.207097429Z" level=info msg="TearDown network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" successfully" Jan 13 21:26:47.207687 containerd[1459]: time="2025-01-13T21:26:47.207304329Z" level=info msg="StopPodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" returns successfully" Jan 13 21:26:47.208789 containerd[1459]: time="2025-01-13T21:26:47.208294716Z" level=info msg="RemovePodSandbox for \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" Jan 13 21:26:47.208789 containerd[1459]: time="2025-01-13T21:26:47.208346911Z" level=info msg="Forcibly stopping sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\"" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.273 [WARNING][5116] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"890d7497-a5c2-420a-b9d1-ef249860cf9d", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"ac5e6898448556c4616d4969c144c22544967677949900ced152838dca82497f", Pod:"coredns-7db6d8ff4d-gfh9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580a536ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.274 [INFO][5116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.274 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" iface="eth0" netns="" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.274 [INFO][5116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.274 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.307 [INFO][5122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.307 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.308 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.315 [WARNING][5122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.315 [INFO][5122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" HandleID="k8s-pod-network.528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--gfh9x-eth0" Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.318 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:47.321735 containerd[1459]: 2025-01-13 21:26:47.319 [INFO][5116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412" Jan 13 21:26:47.322748 containerd[1459]: time="2025-01-13T21:26:47.321872950Z" level=info msg="TearDown network for sandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" successfully" Jan 13 21:26:47.327951 containerd[1459]: time="2025-01-13T21:26:47.327863051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:26:47.328144 containerd[1459]: time="2025-01-13T21:26:47.327988089Z" level=info msg="RemovePodSandbox \"528c3e0465caf8500fe3e06eb7d77a4a2515db8580667256b611d61b04b97412\" returns successfully" Jan 13 21:26:47.329110 containerd[1459]: time="2025-01-13T21:26:47.329066923Z" level=info msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.387 [WARNING][5140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a09ac35e-7d2f-4aff-9b72-388aa54a776e", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031", Pod:"calico-apiserver-6556db8f5f-j25nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali704d7ebc52d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.387 [INFO][5140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.387 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" iface="eth0" netns="" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.387 [INFO][5140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.387 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.419 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.420 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.420 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.429 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.429 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.431 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:26:47.434967 containerd[1459]: 2025-01-13 21:26:47.433 [INFO][5140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.435853 containerd[1459]: time="2025-01-13T21:26:47.435055238Z" level=info msg="TearDown network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" successfully" Jan 13 21:26:47.435853 containerd[1459]: time="2025-01-13T21:26:47.435098515Z" level=info msg="StopPodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" returns successfully" Jan 13 21:26:47.435969 containerd[1459]: time="2025-01-13T21:26:47.435917798Z" level=info msg="RemovePodSandbox for \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" Jan 13 21:26:47.436017 containerd[1459]: time="2025-01-13T21:26:47.435963456Z" level=info msg="Forcibly stopping sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\"" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.499 [WARNING][5164] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0", GenerateName:"calico-apiserver-6556db8f5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a09ac35e-7d2f-4aff-9b72-388aa54a776e", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6556db8f5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"36b4a12abcbe7d548ea403ceebea90b245dee10f7c662c862d9378e22fc1f031", Pod:"calico-apiserver-6556db8f5f-j25nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali704d7ebc52d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.500 [INFO][5164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.500 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" iface="eth0" netns="" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.500 [INFO][5164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.500 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.529 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0" Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.529 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.530 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.538 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0"
Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.538 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" HandleID="k8s-pod-network.444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--apiserver--6556db8f5f--j25nm-eth0"
Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.539 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:26:47.542789 containerd[1459]: 2025-01-13 21:26:47.541 [INFO][5164] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab"
Jan 13 21:26:47.544118 containerd[1459]: time="2025-01-13T21:26:47.542846013Z" level=info msg="TearDown network for sandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" successfully"
Jan 13 21:26:47.548780 containerd[1459]: time="2025-01-13T21:26:47.548651932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:26:47.549051 containerd[1459]: time="2025-01-13T21:26:47.548809981Z" level=info msg="RemovePodSandbox \"444a2672e0d39e4b385dbacb3e3d63f84c8a1b89149110687364eec45b2e43ab\" returns successfully"
Jan 13 21:26:47.549662 containerd[1459]: time="2025-01-13T21:26:47.549623136Z" level=info msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\""
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.605 [WARNING][5188] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eea4dbd0-48b0-456a-9730-2e0b0b5023a9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a", Pod:"coredns-7db6d8ff4d-6cdjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe998e4da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.605 [INFO][5188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.606 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" iface="eth0" netns=""
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.606 [INFO][5188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.606 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.638 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.639 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.639 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.647 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.647 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.649 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:26:47.652746 containerd[1459]: 2025-01-13 21:26:47.651 [INFO][5188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.654167 containerd[1459]: time="2025-01-13T21:26:47.652883562Z" level=info msg="TearDown network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" successfully"
Jan 13 21:26:47.654167 containerd[1459]: time="2025-01-13T21:26:47.652934066Z" level=info msg="StopPodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" returns successfully"
Jan 13 21:26:47.654167 containerd[1459]: time="2025-01-13T21:26:47.653851852Z" level=info msg="RemovePodSandbox for \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\""
Jan 13 21:26:47.654167 containerd[1459]: time="2025-01-13T21:26:47.653902455Z" level=info msg="Forcibly stopping sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\""
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.711 [WARNING][5212] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eea4dbd0-48b0-456a-9730-2e0b0b5023a9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"9297e8411d5ac12f951cd7fbfddfd7d9f7deede5552b9aef44f17dddb1c25f0a", Pod:"coredns-7db6d8ff4d-6cdjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe998e4da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.711 [INFO][5212] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.711 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" iface="eth0" netns=""
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.712 [INFO][5212] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.712 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.746 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.746 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.747 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.755 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.755 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" HandleID="k8s-pod-network.b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--6cdjn-eth0"
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.757 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:26:47.760556 containerd[1459]: 2025-01-13 21:26:47.758 [INFO][5212] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf"
Jan 13 21:26:47.760556 containerd[1459]: time="2025-01-13T21:26:47.760508371Z" level=info msg="TearDown network for sandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" successfully"
Jan 13 21:26:47.766890 containerd[1459]: time="2025-01-13T21:26:47.766644832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:26:47.766890 containerd[1459]: time="2025-01-13T21:26:47.766801096Z" level=info msg="RemovePodSandbox \"b20b7969d6989be7d95dbd6f758a390defac7a38ae1cb930d1834fc22dd07bdf\" returns successfully"
Jan 13 21:26:47.768296 containerd[1459]: time="2025-01-13T21:26:47.767789015Z" level=info msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\""
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.825 [WARNING][5238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0", GenerateName:"calico-kube-controllers-6dfb8cc8cd-", Namespace:"calico-system", SelfLink:"", UID:"fe1c0d21-cbd2-493c-aac5-49c46482135d", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfb8cc8cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf", Pod:"calico-kube-controllers-6dfb8cc8cd-swbqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1bceffb2ec1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.826 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.826 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" iface="eth0" netns=""
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.826 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.826 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.852 [INFO][5244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.853 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.853 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.861 [WARNING][5244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.861 [INFO][5244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.865 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:26:47.868795 containerd[1459]: 2025-01-13 21:26:47.866 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.869714 containerd[1459]: time="2025-01-13T21:26:47.868803415Z" level=info msg="TearDown network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" successfully"
Jan 13 21:26:47.869714 containerd[1459]: time="2025-01-13T21:26:47.868845003Z" level=info msg="StopPodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" returns successfully"
Jan 13 21:26:47.869714 containerd[1459]: time="2025-01-13T21:26:47.869555426Z" level=info msg="RemovePodSandbox for \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\""
Jan 13 21:26:47.869888 containerd[1459]: time="2025-01-13T21:26:47.869744818Z" level=info msg="Forcibly stopping sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\""
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.919 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0", GenerateName:"calico-kube-controllers-6dfb8cc8cd-", Namespace:"calico-system", SelfLink:"", UID:"fe1c0d21-cbd2-493c-aac5-49c46482135d", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 26, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfb8cc8cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-a7efd0bedf02392d98fb.c.flatcar-212911.internal", ContainerID:"efd6a70164d57ec26ade3918dbb339b897a00b739810b74ceb2ead083d49e9cf", Pod:"calico-kube-controllers-6dfb8cc8cd-swbqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1bceffb2ec1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.920 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.920 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" iface="eth0" netns=""
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.920 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.920 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.949 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.949 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.949 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.958 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.958 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" HandleID="k8s-pod-network.2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad" Workload="ci--4081--3--0--a7efd0bedf02392d98fb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dfb8cc8cd--swbqm-eth0"
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.962 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:26:47.965273 containerd[1459]: 2025-01-13 21:26:47.963 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad"
Jan 13 21:26:47.965957 containerd[1459]: time="2025-01-13T21:26:47.965868369Z" level=info msg="TearDown network for sandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" successfully"
Jan 13 21:26:47.971815 containerd[1459]: time="2025-01-13T21:26:47.971764251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:26:47.971957 containerd[1459]: time="2025-01-13T21:26:47.971869405Z" level=info msg="RemovePodSandbox \"2edd816fbc635bd9bd00570e2cdf12c7515f81e5ad2700fd0841e6b725e306ad\" returns successfully"
Jan 13 21:26:48.653503 systemd[1]: Started sshd@11-10.128.0.101:22-147.75.109.163:54022.service - OpenSSH per-connection server daemon (147.75.109.163:54022).
Jan 13 21:26:48.946554 sshd[5276]: Accepted publickey for core from 147.75.109.163 port 54022 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:26:48.948577 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:48.955254 systemd-logind[1439]: New session 12 of user core.
Jan 13 21:26:48.960976 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:26:49.236242 sshd[5276]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:49.241053 systemd[1]: sshd@11-10.128.0.101:22-147.75.109.163:54022.service: Deactivated successfully.
Jan 13 21:26:49.244148 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:26:49.247767 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:26:49.249352 systemd-logind[1439]: Removed session 12.
Jan 13 21:26:49.897634 kubelet[2666]: I0113 21:26:49.897115 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:26:49.930740 kubelet[2666]: I0113 21:26:49.928060 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-brgw4" podStartSLOduration=32.317373075 podStartE2EDuration="37.928027829s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:39.496570399 +0000 UTC m=+53.204425513" lastFinishedPulling="2025-01-13 21:26:45.107225158 +0000 UTC m=+58.815080267" observedRunningTime="2025-01-13 21:26:46.074880571 +0000 UTC m=+59.782735693" watchObservedRunningTime="2025-01-13 21:26:49.928027829 +0000 UTC m=+63.635882947"
Jan 13 21:26:54.294137 systemd[1]: Started sshd@12-10.128.0.101:22-147.75.109.163:54034.service - OpenSSH per-connection server daemon (147.75.109.163:54034).
Jan 13 21:26:54.587483 sshd[5319]: Accepted publickey for core from 147.75.109.163 port 54034 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:26:54.589512 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:54.596242 systemd-logind[1439]: New session 13 of user core.
Jan 13 21:26:54.601931 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:26:54.879520 sshd[5319]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:54.884515 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:26:54.884935 systemd[1]: sshd@12-10.128.0.101:22-147.75.109.163:54034.service: Deactivated successfully.
Jan 13 21:26:54.888616 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:26:54.891918 systemd-logind[1439]: Removed session 13.
Jan 13 21:26:58.543012 kubelet[2666]: I0113 21:26:58.542342 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:26:59.935324 systemd[1]: Started sshd@13-10.128.0.101:22-147.75.109.163:45570.service - OpenSSH per-connection server daemon (147.75.109.163:45570).
Jan 13 21:27:00.233571 sshd[5359]: Accepted publickey for core from 147.75.109.163 port 45570 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:00.236040 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:00.242523 systemd-logind[1439]: New session 14 of user core.
Jan 13 21:27:00.247995 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:27:00.539645 sshd[5359]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:00.546726 systemd[1]: sshd@13-10.128.0.101:22-147.75.109.163:45570.service: Deactivated successfully.
Jan 13 21:27:00.550518 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:27:00.552408 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:27:00.555119 systemd-logind[1439]: Removed session 14.
Jan 13 21:27:05.597278 systemd[1]: Started sshd@14-10.128.0.101:22-147.75.109.163:45576.service - OpenSSH per-connection server daemon (147.75.109.163:45576).
Jan 13 21:27:05.889655 sshd[5375]: Accepted publickey for core from 147.75.109.163 port 45576 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:05.892680 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:05.900684 systemd-logind[1439]: New session 15 of user core.
Jan 13 21:27:05.906033 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:27:06.197928 sshd[5375]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:06.204502 systemd[1]: sshd@14-10.128.0.101:22-147.75.109.163:45576.service: Deactivated successfully.
Jan 13 21:27:06.209404 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:27:06.212052 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:27:06.214322 systemd-logind[1439]: Removed session 15.
Jan 13 21:27:11.254164 systemd[1]: Started sshd@15-10.128.0.101:22-147.75.109.163:45308.service - OpenSSH per-connection server daemon (147.75.109.163:45308).
Jan 13 21:27:11.547190 sshd[5393]: Accepted publickey for core from 147.75.109.163 port 45308 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:11.549317 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:11.556868 systemd-logind[1439]: New session 16 of user core.
Jan 13 21:27:11.562010 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:27:11.846682 sshd[5393]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:11.854144 systemd[1]: sshd@15-10.128.0.101:22-147.75.109.163:45308.service: Deactivated successfully.
Jan 13 21:27:11.857913 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:27:11.859208 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:27:11.861146 systemd-logind[1439]: Removed session 16.
Jan 13 21:27:16.906248 systemd[1]: Started sshd@16-10.128.0.101:22-147.75.109.163:45318.service - OpenSSH per-connection server daemon (147.75.109.163:45318).
Jan 13 21:27:17.208424 sshd[5413]: Accepted publickey for core from 147.75.109.163 port 45318 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:17.210545 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:17.217658 systemd-logind[1439]: New session 17 of user core.
Jan 13 21:27:17.225014 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:27:17.517290 sshd[5413]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:17.523680 systemd[1]: sshd@16-10.128.0.101:22-147.75.109.163:45318.service: Deactivated successfully.
Jan 13 21:27:17.527470 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:27:17.529131 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:27:17.531484 systemd-logind[1439]: Removed session 17.
Jan 13 21:27:17.575168 systemd[1]: Started sshd@17-10.128.0.101:22-147.75.109.163:45526.service - OpenSSH per-connection server daemon (147.75.109.163:45526).
Jan 13 21:27:17.753121 systemd[1]: run-containerd-runc-k8s.io-c9c8f9428dde41622be3d0f626d3e5cc0afa875e642f2f63a90277287762f8a8-runc.iB2WwZ.mount: Deactivated successfully.
Jan 13 21:27:17.863747 sshd[5426]: Accepted publickey for core from 147.75.109.163 port 45526 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:17.865973 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:17.871874 systemd-logind[1439]: New session 18 of user core.
Jan 13 21:27:17.878948 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:27:18.189206 sshd[5426]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:18.196883 systemd[1]: sshd@17-10.128.0.101:22-147.75.109.163:45526.service: Deactivated successfully.
Jan 13 21:27:18.201364 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:27:18.204251 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:27:18.206541 systemd-logind[1439]: Removed session 18.
Jan 13 21:27:18.248192 systemd[1]: Started sshd@18-10.128.0.101:22-147.75.109.163:45532.service - OpenSSH per-connection server daemon (147.75.109.163:45532).
Jan 13 21:27:18.532156 sshd[5456]: Accepted publickey for core from 147.75.109.163 port 45532 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:18.534143 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:18.541233 systemd-logind[1439]: New session 19 of user core.
Jan 13 21:27:18.548209 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:27:18.836682 sshd[5456]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:18.844303 systemd[1]: sshd@18-10.128.0.101:22-147.75.109.163:45532.service: Deactivated successfully.
Jan 13 21:27:18.848537 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:27:18.849953 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:27:18.851973 systemd-logind[1439]: Removed session 19.
Jan 13 21:27:23.894353 systemd[1]: Started sshd@19-10.128.0.101:22-147.75.109.163:45534.service - OpenSSH per-connection server daemon (147.75.109.163:45534).
Jan 13 21:27:24.183906 sshd[5493]: Accepted publickey for core from 147.75.109.163 port 45534 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:24.185850 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:24.192456 systemd-logind[1439]: New session 20 of user core.
Jan 13 21:27:24.196970 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:27:24.476729 sshd[5493]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:24.481601 systemd[1]: sshd@19-10.128.0.101:22-147.75.109.163:45534.service: Deactivated successfully.
Jan 13 21:27:24.484555 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:27:24.487010 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:27:24.488630 systemd-logind[1439]: Removed session 20.
Jan 13 21:27:25.428515 systemd[1]: run-containerd-runc-k8s.io-77e9ca8c2795135a48c09766eddcdf9b5c57550c44751ba564aed59984e26804-runc.F9OQKN.mount: Deactivated successfully.
Jan 13 21:27:29.535227 systemd[1]: Started sshd@20-10.128.0.101:22-147.75.109.163:59862.service - OpenSSH per-connection server daemon (147.75.109.163:59862).
Jan 13 21:27:29.830022 sshd[5527]: Accepted publickey for core from 147.75.109.163 port 59862 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:29.832678 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:29.840515 systemd-logind[1439]: New session 21 of user core.
Jan 13 21:27:29.845975 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:27:30.137344 sshd[5527]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:30.143314 systemd[1]: sshd@20-10.128.0.101:22-147.75.109.163:59862.service: Deactivated successfully.
Jan 13 21:27:30.147527 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:27:30.150654 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:27:30.153075 systemd-logind[1439]: Removed session 21.
Jan 13 21:27:35.201185 systemd[1]: Started sshd@21-10.128.0.101:22-147.75.109.163:59872.service - OpenSSH per-connection server daemon (147.75.109.163:59872).
Jan 13 21:27:35.512832 sshd[5542]: Accepted publickey for core from 147.75.109.163 port 59872 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:35.515348 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:35.523558 systemd-logind[1439]: New session 22 of user core.
Jan 13 21:27:35.530053 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:27:35.859186 sshd[5542]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:35.874413 systemd[1]: sshd@21-10.128.0.101:22-147.75.109.163:59872.service: Deactivated successfully.
Jan 13 21:27:35.880575 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:27:35.883970 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:27:35.887050 systemd-logind[1439]: Removed session 22.
Jan 13 21:27:40.917262 systemd[1]: Started sshd@22-10.128.0.101:22-147.75.109.163:57046.service - OpenSSH per-connection server daemon (147.75.109.163:57046).
Jan 13 21:27:41.220841 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 57046 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:41.223825 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:41.231942 systemd-logind[1439]: New session 23 of user core.
Jan 13 21:27:41.238018 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:27:41.530620 sshd[5556]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:41.536300 systemd[1]: sshd@22-10.128.0.101:22-147.75.109.163:57046.service: Deactivated successfully.
Jan 13 21:27:41.540845 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:27:41.543507 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:27:41.545832 systemd-logind[1439]: Removed session 23.
Jan 13 21:27:41.591221 systemd[1]: Started sshd@23-10.128.0.101:22-147.75.109.163:57056.service - OpenSSH per-connection server daemon (147.75.109.163:57056).
Jan 13 21:27:41.885062 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 57056 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:41.887379 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:41.895916 systemd-logind[1439]: New session 24 of user core.
Jan 13 21:27:41.903015 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:27:42.258913 sshd[5568]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:42.263810 systemd[1]: sshd@23-10.128.0.101:22-147.75.109.163:57056.service: Deactivated successfully.
Jan 13 21:27:42.267761 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:27:42.270074 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:27:42.272472 systemd-logind[1439]: Removed session 24.
Jan 13 21:27:42.314172 systemd[1]: Started sshd@24-10.128.0.101:22-147.75.109.163:57072.service - OpenSSH per-connection server daemon (147.75.109.163:57072).
Jan 13 21:27:42.597542 sshd[5579]: Accepted publickey for core from 147.75.109.163 port 57072 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:42.599631 sshd[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:42.606317 systemd-logind[1439]: New session 25 of user core.
Jan 13 21:27:42.613964 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:27:44.965102 sshd[5579]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:44.978891 systemd[1]: sshd@24-10.128.0.101:22-147.75.109.163:57072.service: Deactivated successfully.
Jan 13 21:27:44.987848 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:27:44.990111 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:27:44.993511 systemd-logind[1439]: Removed session 25.
Jan 13 21:27:45.022403 systemd[1]: Started sshd@25-10.128.0.101:22-147.75.109.163:57088.service - OpenSSH per-connection server daemon (147.75.109.163:57088).
Jan 13 21:27:45.331524 sshd[5596]: Accepted publickey for core from 147.75.109.163 port 57088 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:45.335349 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:45.344672 systemd-logind[1439]: New session 26 of user core.
Jan 13 21:27:45.352157 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:27:45.818609 sshd[5596]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:45.826625 systemd[1]: sshd@25-10.128.0.101:22-147.75.109.163:57088.service: Deactivated successfully.
Jan 13 21:27:45.830603 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:27:45.832594 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:27:45.834534 systemd-logind[1439]: Removed session 26.
Jan 13 21:27:45.876806 systemd[1]: Started sshd@26-10.128.0.101:22-147.75.109.163:57100.service - OpenSSH per-connection server daemon (147.75.109.163:57100).
Jan 13 21:27:46.170551 sshd[5609]: Accepted publickey for core from 147.75.109.163 port 57100 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:46.174203 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:46.185884 systemd-logind[1439]: New session 27 of user core.
Jan 13 21:27:46.192039 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:27:46.471269 sshd[5609]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:46.477732 systemd[1]: sshd@26-10.128.0.101:22-147.75.109.163:57100.service: Deactivated successfully.
Jan 13 21:27:46.481108 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:27:46.484497 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:27:46.487147 systemd-logind[1439]: Removed session 27.
Jan 13 21:27:51.530162 systemd[1]: Started sshd@27-10.128.0.101:22-147.75.109.163:35912.service - OpenSSH per-connection server daemon (147.75.109.163:35912).
Jan 13 21:27:51.819026 sshd[5628]: Accepted publickey for core from 147.75.109.163 port 35912 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:51.820966 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:51.827939 systemd-logind[1439]: New session 28 of user core.
Jan 13 21:27:51.832995 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:27:52.121020 sshd[5628]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:52.128308 systemd[1]: sshd@27-10.128.0.101:22-147.75.109.163:35912.service: Deactivated successfully.
Jan 13 21:27:52.131941 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:27:52.134263 systemd-logind[1439]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:27:52.136518 systemd-logind[1439]: Removed session 28.
Jan 13 21:27:57.177804 systemd[1]: Started sshd@28-10.128.0.101:22-147.75.109.163:35918.service - OpenSSH per-connection server daemon (147.75.109.163:35918).
Jan 13 21:27:57.475855 sshd[5688]: Accepted publickey for core from 147.75.109.163 port 35918 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:27:57.478540 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:57.485458 systemd-logind[1439]: New session 29 of user core.
Jan 13 21:27:57.489950 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 21:27:57.767640 sshd[5688]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:57.773470 systemd[1]: sshd@28-10.128.0.101:22-147.75.109.163:35918.service: Deactivated successfully.
Jan 13 21:27:57.776646 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:27:57.778173 systemd-logind[1439]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:27:57.779814 systemd-logind[1439]: Removed session 29.
Jan 13 21:28:02.827858 systemd[1]: Started sshd@29-10.128.0.101:22-147.75.109.163:52924.service - OpenSSH per-connection server daemon (147.75.109.163:52924).
Jan 13 21:28:03.141476 sshd[5716]: Accepted publickey for core from 147.75.109.163 port 52924 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:28:03.144473 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:03.156510 systemd-logind[1439]: New session 30 of user core.
Jan 13 21:28:03.163185 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 21:28:03.480153 sshd[5716]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:03.489148 systemd-logind[1439]: Session 30 logged out. Waiting for processes to exit.
Jan 13 21:28:03.491310 systemd[1]: sshd@29-10.128.0.101:22-147.75.109.163:52924.service: Deactivated successfully.
Jan 13 21:28:03.496849 systemd[1]: session-30.scope: Deactivated successfully.
Jan 13 21:28:03.504806 systemd-logind[1439]: Removed session 30.