Jan 13 21:21:47.074697 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:21:47.074744 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:47.074766 kernel: BIOS-provided physical RAM map:
Jan 13 21:21:47.074783 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 21:21:47.074799 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 21:21:47.074815 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 21:21:47.074832 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 21:21:47.074854 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 21:21:47.074871 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 13 21:21:47.074888 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 13 21:21:47.074905 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 13 21:21:47.074921 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 13 21:21:47.074939 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 21:21:47.074955 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 21:21:47.075010 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 21:21:47.075028 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 21:21:47.075044 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 21:21:47.075060 kernel: NX (Execute Disable) protection: active
Jan 13 21:21:47.075078 kernel: APIC: Static calls initialized
Jan 13 21:21:47.075097 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:21:47.075115 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 13 21:21:47.075133 kernel: SMBIOS 2.4 present.
Jan 13 21:21:47.075150 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 21:21:47.075168 kernel: Hypervisor detected: KVM
Jan 13 21:21:47.075189 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:21:47.075206 kernel: kvm-clock: using sched offset of 11840249700 cycles
Jan 13 21:21:47.075225 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:21:47.075243 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 21:21:47.075260 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:21:47.075279 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:21:47.075298 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 21:21:47.075316 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 21:21:47.075334 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:21:47.075358 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 21:21:47.075376 kernel: Using GB pages for direct mapping
Jan 13 21:21:47.075394 kernel: Secure boot disabled
Jan 13 21:21:47.075412 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:21:47.075430 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 21:21:47.075448 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 21:21:47.075467 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 21:21:47.075502 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 21:21:47.075526 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 21:21:47.075546 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 21:21:47.075566 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 21:21:47.075585 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 21:21:47.075604 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 21:21:47.075624 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 21:21:47.075648 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 21:21:47.075668 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 21:21:47.075688 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 21:21:47.075706 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 21:21:47.075722 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 21:21:47.075739 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 21:21:47.075759 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 21:21:47.075778 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 21:21:47.075797 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 21:21:47.075823 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 21:21:47.075842 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:21:47.075860 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:21:47.075879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:21:47.075899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 21:21:47.075918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 21:21:47.075938 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 21:21:47.075956 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 21:21:47.075994 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 13 21:21:47.076020 kernel: Zone ranges:
Jan 13 21:21:47.076039 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:21:47.076058 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:21:47.076077 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:21:47.076106 kernel: Movable zone start for each node
Jan 13 21:21:47.076141 kernel: Early memory node ranges
Jan 13 21:21:47.076161 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 21:21:47.076179 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 21:21:47.076198 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 13 21:21:47.076223 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 21:21:47.076242 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:21:47.076260 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 21:21:47.076278 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:21:47.076297 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 21:21:47.076315 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 21:21:47.076335 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 21:21:47.076355 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 21:21:47.076375 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:21:47.076400 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:21:47.076419 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:21:47.076439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:21:47.076459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:21:47.076478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:21:47.076505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:21:47.076526 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:21:47.076545 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:21:47.076565 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:21:47.076590 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:21:47.076611 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:21:47.076630 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:21:47.076650 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:21:47.076669 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:21:47.076689 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:21:47.076708 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:21:47.076728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:21:47.076750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:47.076776 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:21:47.076796 kernel: random: crng init done
Jan 13 21:21:47.076816 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 21:21:47.076837 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:21:47.076857 kernel: Fallback order for Node 0: 0
Jan 13 21:21:47.076877 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 13 21:21:47.076898 kernel: Policy zone: Normal
Jan 13 21:21:47.076918 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:21:47.076943 kernel: software IO TLB: area num 2.
Jan 13 21:21:47.076964 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved)
Jan 13 21:21:47.077003 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:21:47.077024 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:21:47.077044 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:21:47.077065 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:21:47.077085 kernel: Dynamic Preempt: voluntary
Jan 13 21:21:47.077105 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:21:47.077124 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:21:47.077165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:21:47.077187 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:21:47.077209 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:21:47.077234 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:21:47.077254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:21:47.077274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:21:47.077295 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:21:47.077314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:21:47.077335 kernel: Console: colour dummy device 80x25
Jan 13 21:21:47.077362 kernel: printk: console [ttyS0] enabled
Jan 13 21:21:47.077382 kernel: ACPI: Core revision 20230628
Jan 13 21:21:47.077403 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:21:47.077424 kernel: x2apic enabled
Jan 13 21:21:47.077443 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:21:47.077464 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 21:21:47.077486 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:21:47.077516 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 21:21:47.077541 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 21:21:47.077561 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 21:21:47.077581 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:21:47.077600 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:21:47.077619 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:21:47.077640 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 21:21:47.077660 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:21:47.077681 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:21:47.077702 kernel: RETBleed: Mitigation: IBRS
Jan 13 21:21:47.077728 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:21:47.077746 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 21:21:47.077765 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:21:47.077787 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:21:47.077807 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:21:47.077829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:21:47.077851 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:21:47.077872 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:21:47.077894 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:21:47.077920 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:21:47.077941 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:21:47.077962 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:21:47.078009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:21:47.078031 kernel: landlock: Up and running.
Jan 13 21:21:47.078053 kernel: SELinux: Initializing.
Jan 13 21:21:47.078074 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:21:47.078096 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:21:47.078118 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 21:21:47.078145 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:47.078166 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:47.078188 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:47.078209 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 21:21:47.078231 kernel: signal: max sigframe size: 1776
Jan 13 21:21:47.078251 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:21:47.078274 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:21:47.078295 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:21:47.078317 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:21:47.078343 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:21:47.078364 kernel: .... node #0, CPUs: #1
Jan 13 21:21:47.078385 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:21:47.078408 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:21:47.078429 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:21:47.078450 kernel: smpboot: Max logical packages: 1
Jan 13 21:21:47.078472 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 21:21:47.078494 kernel: devtmpfs: initialized
Jan 13 21:21:47.078528 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:21:47.078548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 21:21:47.078571 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:21:47.078592 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:21:47.078613 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:21:47.078635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:21:47.078656 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:21:47.078679 kernel: audit: type=2000 audit(1736803305.528:1): state=initialized audit_enabled=0 res=1
Jan 13 21:21:47.078699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:21:47.078726 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:21:47.078747 kernel: cpuidle: using governor menu
Jan 13 21:21:47.078769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:21:47.078790 kernel: dca service started, version 1.12.1
Jan 13 21:21:47.078812 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:21:47.078833 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:21:47.078855 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:21:47.078876 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:21:47.078898 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:21:47.078924 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:21:47.078946 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:21:47.078980 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:21:47.079002 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:21:47.079024 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:21:47.079044 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:21:47.079064 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:21:47.079085 kernel: ACPI: Interpreter enabled
Jan 13 21:21:47.079105 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:21:47.079131 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:21:47.079153 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:21:47.079174 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 21:21:47.079194 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:21:47.079216 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:21:47.079518 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:21:47.079763 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:21:47.080019 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:21:47.080052 kernel: PCI host bridge to bus 0000:00
Jan 13 21:21:47.080285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:21:47.080504 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:21:47.080711 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:47.080917 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 21:21:47.081139 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:21:47.081386 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:21:47.081639 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 21:21:47.081878 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:21:47.082136 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:21:47.082380 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 21:21:47.082626 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 21:21:47.082864 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 21:21:47.083125 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:21:47.083348 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 21:21:47.083576 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 21:21:47.083834 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:21:47.084111 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 21:21:47.084346 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 21:21:47.084380 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:21:47.084403 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:21:47.084425 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:21:47.084445 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:21:47.084466 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:21:47.084489 kernel: iommu: Default domain type: Translated
Jan 13 21:21:47.084518 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:21:47.084540 kernel: efivars: Registered efivars operations
Jan 13 21:21:47.084560 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:21:47.084584 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:21:47.084606 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 21:21:47.084627 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 21:21:47.084648 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 21:21:47.084669 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 21:21:47.084690 kernel: vgaarb: loaded
Jan 13 21:21:47.084712 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:21:47.084732 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:21:47.084754 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:21:47.084781 kernel: pnp: PnP ACPI init
Jan 13 21:21:47.084800 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 21:21:47.084820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:21:47.084841 kernel: NET: Registered PF_INET protocol family
Jan 13 21:21:47.084859 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:21:47.084882 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 21:21:47.084908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:21:47.084934 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:21:47.084955 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 21:21:47.085030 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 21:21:47.085052 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:21:47.085073 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:21:47.085094 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:21:47.085115 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:21:47.085352 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:21:47.085565 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:21:47.085766 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:47.085997 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 21:21:47.086229 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:21:47.086256 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:21:47.086277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:21:47.086298 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 21:21:47.086320 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:21:47.086341 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:21:47.086362 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:21:47.086390 kernel: Initialise system trusted keyrings
Jan 13 21:21:47.086410 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 21:21:47.086431 kernel: Key type asymmetric registered
Jan 13 21:21:47.086452 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:21:47.086472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:21:47.086518 kernel: io scheduler mq-deadline registered
Jan 13 21:21:47.086540 kernel: io scheduler kyber registered
Jan 13 21:21:47.086561 kernel: io scheduler bfq registered
Jan 13 21:21:47.086582 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:21:47.086609 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:21:47.086836 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:47.086862 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 21:21:47.087129 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:47.087155 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:21:47.087377 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:47.087402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:21:47.087424 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:21:47.087445 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 21:21:47.087473 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 21:21:47.087503 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 21:21:47.087740 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 21:21:47.087767 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:21:47.087788 kernel: i8042: Warning: Keylock active
Jan 13 21:21:47.087809 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:21:47.087830 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:21:47.088079 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:21:47.088299 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:21:47.088514 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:21:46 UTC (1736803306)
Jan 13 21:21:47.088723 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:21:47.088748 kernel: intel_pstate: CPU model not supported
Jan 13 21:21:47.088769 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:21:47.088790 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:21:47.088811 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:21:47.088832 kernel: Segment Routing with IPv6
Jan 13 21:21:47.088859 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:21:47.088880 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:21:47.088902 kernel: Key type dns_resolver registered
Jan 13 21:21:47.088922 kernel: IPI shorthand broadcast: enabled
Jan 13 21:21:47.088943 kernel: sched_clock: Marking stable (798004017, 163521727)->(980597555, -19071811)
Jan 13 21:21:47.088989 kernel: registered taskstats version 1
Jan 13 21:21:47.089011 kernel: Loading compiled-in X.509 certificates
Jan 13 21:21:47.089032 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:21:47.089052 kernel: Key type .fscrypt registered
Jan 13 21:21:47.089076 kernel: Key type fscrypt-provisioning registered
Jan 13 21:21:47.089096 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:21:47.089117 kernel: ima: No architecture policies found
Jan 13 21:21:47.089139 kernel: clk: Disabling unused clocks
Jan 13 21:21:47.089159 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:21:47.089178 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:21:47.089198 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:21:47.089220 kernel: Run /init as init process
Jan 13 21:21:47.089246 kernel: with arguments:
Jan 13 21:21:47.089266 kernel: /init
Jan 13 21:21:47.089287 kernel: with environment:
Jan 13 21:21:47.089307 kernel: HOME=/
Jan 13 21:21:47.089327 kernel: TERM=linux
Jan 13 21:21:47.089349 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:21:47.089371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 13 21:21:47.089396 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:21:47.089425 systemd[1]: Detected virtualization google.
Jan 13 21:21:47.089446 systemd[1]: Detected architecture x86-64.
Jan 13 21:21:47.089467 systemd[1]: Running in initrd.
Jan 13 21:21:47.089486 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:21:47.089513 systemd[1]: Hostname set to <localhost>.
Jan 13 21:21:47.089534 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:21:47.089552 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:21:47.089573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:21:47.089601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:21:47.089624 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:21:47.089646 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:21:47.089668 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:21:47.089689 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:21:47.089711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:21:47.089731 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:21:47.089758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:21:47.089780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:21:47.089825 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:21:47.089852 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:21:47.089874 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:21:47.089896 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:21:47.089923 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:21:47.089943 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:21:47.089990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:21:47.090013 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:21:47.090036 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:21:47.090057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:21:47.090080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:21:47.090103 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:21:47.090131 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:21:47.090153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:21:47.090175 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:21:47.090197 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:21:47.090220 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:21:47.090242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:21:47.090264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:47.090324 systemd-journald[183]: Collecting audit messages is disabled.
Jan 13 21:21:47.090374 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:21:47.090398 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:21:47.090419 systemd-journald[183]: Journal started
Jan 13 21:21:47.090465 systemd-journald[183]: Runtime Journal (/run/log/journal/2865670be8494c569d91bad95ecb625a) is 8.0M, max 148.7M, 140.7M free.
Jan 13 21:21:47.094803 systemd-modules-load[184]: Inserted module 'overlay'
Jan 13 21:21:47.097262 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:21:47.104103 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:21:47.121174 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:21:47.132960 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:21:47.140481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:47.146558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:21:47.146595 kernel: Bridge firewalling registered
Jan 13 21:21:47.145713 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 13 21:21:47.150468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:21:47.157263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:21:47.171280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:47.174603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:21:47.183849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:21:47.187034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:21:47.205230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:21:47.209187 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:21:47.212442 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:47.225221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:21:47.236195 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:21:47.264178 dracut-cmdline[217]: dracut-dracut-053
Jan 13 21:21:47.267330 systemd-resolved[213]: Positive Trust Anchors:
Jan 13 21:21:47.267350 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:21:47.276800 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:47.267422 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:21:47.273415 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 13 21:21:47.275503 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:21:47.280207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:21:47.372013 kernel: SCSI subsystem initialized
Jan 13 21:21:47.383011 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:21:47.394001 kernel: iscsi: registered transport (tcp)
Jan 13 21:21:47.417013 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:21:47.417084 kernel: QLogic iSCSI HBA Driver
Jan 13 21:21:47.473305 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:21:47.480205 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:21:47.520189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:21:47.520266 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:21:47.520298 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:21:47.566036 kernel: raid6: avx2x4 gen() 17805 MB/s
Jan 13 21:21:47.583014 kernel: raid6: avx2x2 gen() 17811 MB/s
Jan 13 21:21:47.600357 kernel: raid6: avx2x1 gen() 13587 MB/s
Jan 13 21:21:47.600409 kernel: raid6: using algorithm avx2x2 gen() 17811 MB/s
Jan 13 21:21:47.618366 kernel: raid6: .... xor() 17572 MB/s, rmw enabled
Jan 13 21:21:47.618442 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:21:47.641008 kernel: xor: automatically using best checksumming function avx
Jan 13 21:21:47.813012 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:21:47.827734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:21:47.835263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:21:47.856998 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jan 13 21:21:47.864392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:21:47.874395 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:21:47.905261 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jan 13 21:21:47.943434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:21:47.950235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:21:48.051023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:21:48.061223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:21:48.099517 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:21:48.110598 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:21:48.119091 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:48.124347 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:21:48.135183 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:21:48.174611 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:21:48.197992 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:21:48.199002 kernel: scsi host0: Virtio SCSI HBA
Jan 13 21:21:48.209027 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 21:21:48.214557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:21:48.215834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:48.224235 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:48.225755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:21:48.226236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:48.232254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:48.244657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:48.267523 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:21:48.267562 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:21:48.306536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:48.317191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:48.323100 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 21:21:48.338138 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 21:21:48.338396 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 21:21:48.338642 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 21:21:48.338868 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:21:48.339119 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:21:48.339148 kernel: GPT:17805311 != 25165823
Jan 13 21:21:48.339171 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:21:48.339201 kernel: GPT:17805311 != 25165823
Jan 13 21:21:48.339224 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:21:48.339248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:48.339272 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 21:21:48.360153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:48.399759 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (454)
Jan 13 21:21:48.401342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 21:21:48.404105 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (453)
Jan 13 21:21:48.431324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 21:21:48.437578 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 21:21:48.437794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 21:21:48.453734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 21:21:48.467180 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:21:48.479311 disk-uuid[550]: Primary Header is updated.
Jan 13 21:21:48.479311 disk-uuid[550]: Secondary Entries is updated.
Jan 13 21:21:48.479311 disk-uuid[550]: Secondary Header is updated.
Jan 13 21:21:48.490998 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:48.498602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:48.518989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:49.527999 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:49.528074 disk-uuid[551]: The operation has completed successfully.
Jan 13 21:21:49.592449 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:21:49.592592 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:21:49.631180 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:21:49.661781 sh[568]: Success
Jan 13 21:21:49.685189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:21:49.761297 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:21:49.768730 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:21:49.795418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:21:49.835884 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:21:49.835986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:49.836015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:21:49.845321 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:21:49.852138 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:21:49.883041 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:21:49.889333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:21:49.890245 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:21:49.895305 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:21:49.942884 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:49.942938 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:49.949984 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:49.967887 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:49.967938 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:49.977180 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:21:50.011122 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:50.004402 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:21:50.019197 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:21:50.149890 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:21:50.178349 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:21:50.223345 ignition[635]: Ignition 2.19.0
Jan 13 21:21:50.223787 ignition[635]: Stage: fetch-offline
Jan 13 21:21:50.226192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:21:50.223869 ignition[635]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:50.239746 systemd-networkd[752]: lo: Link UP
Jan 13 21:21:50.223890 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:50.239751 systemd-networkd[752]: lo: Gained carrier
Jan 13 21:21:50.224150 ignition[635]: parsed url from cmdline: ""
Jan 13 21:21:50.241658 systemd-networkd[752]: Enumeration completed
Jan 13 21:21:50.224159 ignition[635]: no config URL provided
Jan 13 21:21:50.242194 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:50.224171 ignition[635]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:21:50.242202 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:21:50.224189 ignition[635]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:21:50.244343 systemd-networkd[752]: eth0: Link UP
Jan 13 21:21:50.224202 ignition[635]: failed to fetch config: resource requires networking
Jan 13 21:21:50.244351 systemd-networkd[752]: eth0: Gained carrier
Jan 13 21:21:50.224697 ignition[635]: Ignition finished successfully
Jan 13 21:21:50.244365 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:50.307878 ignition[761]: Ignition 2.19.0
Jan 13 21:21:50.255047 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:21:50.307888 ignition[761]: Stage: fetch
Jan 13 21:21:50.256634 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:21:50.308160 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:50.266790 systemd[1]: Reached target network.target - Network.
Jan 13 21:21:50.308178 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:50.288156 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:21:50.308325 ignition[761]: parsed url from cmdline: ""
Jan 13 21:21:50.317121 unknown[761]: fetched base config from "system"
Jan 13 21:21:50.308333 ignition[761]: no config URL provided
Jan 13 21:21:50.317132 unknown[761]: fetched base config from "system"
Jan 13 21:21:50.308342 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:21:50.317141 unknown[761]: fetched user config from "gcp"
Jan 13 21:21:50.308356 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:21:50.320448 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:21:50.308383 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:21:50.344196 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:21:50.311569 ignition[761]: GET result: OK
Jan 13 21:21:50.386408 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:21:50.311669 ignition[761]: parsing config with SHA512: 7a779835f26662e2dd95e316bc4bbac804d48343e13890cd9a4e52d02b1a1813b5c28e152a07c56a227529eed7d4c74d53cf420e12973a14045f6395c0a4f6a0
Jan 13 21:21:50.402128 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:21:50.317638 ignition[761]: fetch: fetch complete
Jan 13 21:21:50.459528 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:21:50.317645 ignition[761]: fetch: fetch passed
Jan 13 21:21:50.467633 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:21:50.317691 ignition[761]: Ignition finished successfully
Jan 13 21:21:50.487272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:21:50.380193 ignition[767]: Ignition 2.19.0
Jan 13 21:21:50.501272 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:21:50.380203 ignition[767]: Stage: kargs
Jan 13 21:21:50.520308 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:21:50.380388 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:50.535273 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:21:50.380400 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:50.558137 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:21:50.381382 ignition[767]: kargs: kargs passed
Jan 13 21:21:50.381435 ignition[767]: Ignition finished successfully
Jan 13 21:21:50.429090 ignition[772]: Ignition 2.19.0
Jan 13 21:21:50.429101 ignition[772]: Stage: disks
Jan 13 21:21:50.429293 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:50.429305 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:50.430322 ignition[772]: disks: disks passed
Jan 13 21:21:50.430381 ignition[772]: Ignition finished successfully
Jan 13 21:21:50.614017 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:21:50.809098 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:21:50.814126 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:21:50.969010 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:21:50.969450 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:21:50.970284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:21:51.000084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:21:51.012180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:21:51.034475 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:21:51.101258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (789)
Jan 13 21:21:51.101295 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:51.101312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:51.101326 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:51.101341 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:51.101366 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:51.034555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:21:51.034595 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:21:51.036518 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:21:51.110586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:51.142202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:21:51.265248 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:21:51.276116 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:21:51.286115 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:21:51.296076 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:21:51.419403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:21:51.425222 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:21:51.450346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:21:51.479125 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:51.459195 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:21:51.504361 ignition[905]: INFO : Ignition 2.19.0
Jan 13 21:21:51.504361 ignition[905]: INFO : Stage: mount
Jan 13 21:21:51.519105 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:51.519105 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:51.519105 ignition[905]: INFO : mount: mount passed
Jan 13 21:21:51.519105 ignition[905]: INFO : Ignition finished successfully
Jan 13 21:21:51.507552 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:21:51.529554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:21:51.563092 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:21:51.798186 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 13 21:21:51.982251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:21:52.030127 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Jan 13 21:21:52.030168 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:52.030185 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:52.030202 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:52.043056 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:52.043134 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:52.046183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:52.081409 ignition[934]: INFO : Ignition 2.19.0
Jan 13 21:21:52.081409 ignition[934]: INFO : Stage: files
Jan 13 21:21:52.096103 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:52.096103 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:52.096103 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:21:52.096103 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:21:52.094923 unknown[934]: wrote ssh authorized keys file for user: core
Jan 13 21:21:52.196073 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:52.196073 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:21:53.225232 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:21:53.398294 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:22:23.401953 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET error: Get "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw": dial tcp 140.82.112.4:443: i/o timeout Jan 13 21:22:23.602422 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #2 Jan 13 21:22:23.891449 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:22:24.397469 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:22:24.397469 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:22:24.438107 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:22:24.438107 ignition[934]: INFO : files: files passed Jan 13 21:22:24.438107 ignition[934]: INFO : Ignition finished successfully Jan 13 21:22:24.402818 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:22:24.434261 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:22:24.461136 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:22:24.514395 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 13 21:22:24.652105 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.652105 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.514511 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:22:24.721085 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.537369 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:22:24.552469 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:22:24.582157 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:22:24.649470 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:22:24.649590 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:22:24.663327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:22:24.678232 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:22:24.710260 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:22:24.716154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:22:24.788696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:22:24.815205 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:22:24.832742 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:22:24.847230 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:22:24.870298 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:22:24.889248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:22:24.889431 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:22:24.922276 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:22:24.940231 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:22:24.958326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:22:24.976285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:22:24.995282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:22:25.018286 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:22:25.038279 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:22:25.057302 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:22:25.077269 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:22:25.097260 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:22:25.115222 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:22:25.115423 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:22:25.146346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:22:25.166246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:22:25.187217 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 21:22:25.187394 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:22:25.205280 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:22:25.205497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:22:25.233280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:22:25.329075 ignition[987]: INFO : Ignition 2.19.0 Jan 13 21:22:25.329075 ignition[987]: INFO : Stage: umount Jan 13 21:22:25.329075 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:22:25.329075 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:22:25.329075 ignition[987]: INFO : umount: umount passed Jan 13 21:22:25.329075 ignition[987]: INFO : Ignition finished successfully Jan 13 21:22:25.233511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:22:25.254277 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:22:25.254465 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:22:25.272296 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:22:25.323271 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:22:25.337108 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:22:25.337429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:22:25.385494 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:22:25.385664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:22:25.427036 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:22:25.428202 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:22:25.428336 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:22:25.434775 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:22:25.434885 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:22:25.460737 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:22:25.460854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:22:25.486183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:22:25.486261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:22:25.506169 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:22:25.506255 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:22:25.526166 systemd[1]: Stopped target network.target - Network. Jan 13 21:22:25.543111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:22:25.543223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:22:25.560152 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:22:25.575098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:22:25.575183 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:22:25.593203 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:22:25.609079 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:22:25.627144 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:22:25.627219 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 21:22:25.645145 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:22:25.645219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:22:25.663120 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:22:25.663214 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:22:25.681128 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:22:25.681209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:22:25.699356 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:22:25.706035 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 13 21:22:25.715416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:22:25.730641 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:22:25.730779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:22:25.749040 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:22:25.749232 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:22:25.765697 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:22:25.765840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:22:25.784246 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:22:25.784313 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:22:25.799281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:22:25.799339 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:22:25.828072 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:22:25.838219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:22:25.838284 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:22:25.879178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:22:25.879267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:22:25.897127 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:22:25.897207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:22:25.917131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:22:25.917213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:22:25.936302 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:22:26.341082 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 13 21:22:25.950501 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:22:25.950748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:22:25.974509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:22:25.974646 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:22:25.994138 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:22:25.994199 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:22:26.014104 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:22:26.014180 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:22:26.041072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:22:26.041154 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:22:26.068077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:22:26.068183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:22:26.102164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:22:26.104229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:22:26.104290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:22:26.140307 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:22:26.140364 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:22:26.169246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:22:26.169307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:22:26.179287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:22:26.179342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:22:26.197735 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:22:26.197856 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:22:26.215699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:22:26.215823 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:22:26.235329 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:22:26.257166 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:22:26.291942 systemd[1]: Switching root. 
Jan 13 21:22:26.499681 systemd-journald[183]: Journal stopped
Jan 13 21:21:50.308325 ignition[761]: parsed url from cmdline: "" Jan 13 21:21:50.317121 unknown[761]: fetched base config from "system" Jan 13 21:21:50.308333 ignition[761]: no config URL provided Jan 13 21:21:50.317132 unknown[761]: fetched base config from "system" Jan 13 21:21:50.308342 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:21:50.317141 unknown[761]: fetched user config from "gcp" Jan 13 21:21:50.308356 ignition[761]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:21:50.320448 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:21:50.308383 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 13 21:21:50.344196 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:21:50.311569 ignition[761]: GET result: OK Jan 13 21:21:50.386408 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:21:50.311669 ignition[761]: parsing config with SHA512: 7a779835f26662e2dd95e316bc4bbac804d48343e13890cd9a4e52d02b1a1813b5c28e152a07c56a227529eed7d4c74d53cf420e12973a14045f6395c0a4f6a0 Jan 13 21:21:50.402128 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:21:50.317638 ignition[761]: fetch: fetch complete Jan 13 21:21:50.459528 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:21:50.317645 ignition[761]: fetch: fetch passed Jan 13 21:21:50.467633 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:21:50.317691 ignition[761]: Ignition finished successfully Jan 13 21:21:50.487272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:21:50.380193 ignition[767]: Ignition 2.19.0 Jan 13 21:21:50.501272 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:21:50.380203 ignition[767]: Stage: kargs Jan 13 21:21:50.520308 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:21:50.380388 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:50.535273 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:21:50.380400 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:50.558137 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:21:50.381382 ignition[767]: kargs: kargs passed Jan 13 21:21:50.381435 ignition[767]: Ignition finished successfully Jan 13 21:21:50.429090 ignition[772]: Ignition 2.19.0 Jan 13 21:21:50.429101 ignition[772]: Stage: disks Jan 13 21:21:50.429293 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:50.429305 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:50.430322 ignition[772]: disks: disks passed Jan 13 21:21:50.430381 ignition[772]: Ignition finished successfully Jan 13 21:21:50.614017 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:21:50.809098 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:21:50.814126 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:21:50.969010 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:21:50.969450 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:21:50.970284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 13 21:21:51.000084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:21:51.012180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:21:51.034475 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:21:51.101258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (789) Jan 13 21:21:51.101295 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:51.101312 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:51.101326 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:21:51.101341 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:21:51.101366 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:21:51.034555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:21:51.034595 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:21:51.036518 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:21:51.110586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:21:51.142202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:21:51.265248 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:21:51.276116 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:21:51.286115 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:21:51.296076 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:21:51.419403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:51.425222 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:21:51.450346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:21:51.479125 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:51.459195 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:21:51.504361 ignition[905]: INFO : Ignition 2.19.0 Jan 13 21:21:51.504361 ignition[905]: INFO : Stage: mount Jan 13 21:21:51.519105 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:51.519105 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:51.519105 ignition[905]: INFO : mount: mount passed Jan 13 21:21:51.519105 ignition[905]: INFO : Ignition finished successfully Jan 13 21:21:51.507552 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:21:51.529554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:21:51.563092 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:21:51.798186 systemd-networkd[752]: eth0: Gained IPv6LL Jan 13 21:21:51.982251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 21:21:52.030127 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917) Jan 13 21:21:52.030168 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:52.030185 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:52.030202 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:21:52.043056 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:21:52.043134 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:21:52.046183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:21:52.081409 ignition[934]: INFO : Ignition 2.19.0 Jan 13 21:21:52.081409 ignition[934]: INFO : Stage: files Jan 13 21:21:52.096103 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:52.096103 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:52.096103 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:21:52.096103 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:21:52.096103 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:21:52.094923 unknown[934]: wrote ssh authorized keys file for user: core Jan 13 21:21:52.196073 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:52.196073 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:21:53.225232 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:21:53.398294 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:21:53.415102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:22:23.401953 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET error: Get "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw": dial tcp 140.82.112.4:443: i/o timeout Jan 13 21:22:23.602422 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #2 Jan 13 21:22:23.891449 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:22:24.397469 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:22:24.397469 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:22:24.438107 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:22:24.438107 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:22:24.438107 ignition[934]: INFO : files: files passed Jan 13 21:22:24.438107 ignition[934]: INFO : Ignition finished successfully Jan 13 21:22:24.402818 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:22:24.434261 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:22:24.461136 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:22:24.514395 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 13 21:22:24.652105 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.652105 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.514511 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:22:24.721085 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:22:24.537369 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:22:24.552469 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:22:24.582157 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:22:24.649470 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:22:24.649590 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:22:24.663327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:22:24.678232 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:22:24.710260 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:22:24.716154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:22:24.788696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:22:24.815205 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:22:24.832742 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:22:24.847230 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:22:24.870298 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:22:24.889248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:22:24.889431 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:22:24.922276 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:22:24.940231 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:22:24.958326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:22:24.976285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:22:24.995282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:22:25.018286 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:22:25.038279 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:22:25.057302 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:22:25.077269 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:22:25.097260 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:22:25.115222 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:22:25.115423 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:22:25.146346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:22:25.166246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:22:25.187217 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 21:22:25.187394 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:22:25.205280 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:22:25.205497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:22:25.233280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:22:25.329075 ignition[987]: INFO : Ignition 2.19.0 Jan 13 21:22:25.329075 ignition[987]: INFO : Stage: umount Jan 13 21:22:25.329075 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:22:25.329075 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:22:25.329075 ignition[987]: INFO : umount: umount passed Jan 13 21:22:25.329075 ignition[987]: INFO : Ignition finished successfully Jan 13 21:22:25.233511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:22:25.254277 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:22:25.254465 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:22:25.272296 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:22:25.323271 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:22:25.337108 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:22:25.337429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:22:25.385494 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:22:25.385664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:22:25.427036 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:22:25.428202 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:22:25.428336 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:22:25.434775 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:22:25.434885 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:22:25.460737 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:22:25.460854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:22:25.486183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:22:25.486261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:22:25.506169 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:22:25.506255 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:22:25.526166 systemd[1]: Stopped target network.target - Network. Jan 13 21:22:25.543111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:22:25.543223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:22:25.560152 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:22:25.575098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:22:25.575183 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:22:25.593203 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:22:25.609079 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:22:25.627144 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:22:25.627219 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 21:22:25.645145 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:22:25.645219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:22:25.663120 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:22:25.663214 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:22:25.681128 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:22:25.681209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:22:25.699356 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:22:25.706035 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 13 21:22:25.715416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:22:25.730641 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:22:25.730779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:22:25.749040 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:22:25.749232 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:22:25.765697 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:22:25.765840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:22:25.784246 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:22:25.784313 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:22:25.799281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:22:25.799339 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:22:25.828072 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:22:25.838219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:22:25.838284 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:22:25.879178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:22:25.879267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:22:25.897127 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:22:25.897207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:22:25.917131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:22:25.917213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:22:25.936302 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:22:26.341082 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 13 21:22:25.950501 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:22:25.950748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:22:25.974509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:22:25.974646 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:22:25.994138 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:22:25.994199 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:22:26.014104 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:22:26.014180 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:22:26.041072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:22:26.041154 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:22:26.068077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:22:26.068183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:22:26.102164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:22:26.104229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:22:26.104290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:22:26.140307 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:22:26.140364 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:22:26.169246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:22:26.169307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:22:26.179287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:22:26.179342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:22:26.197735 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:22:26.197856 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:22:26.215699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:22:26.215823 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:22:26.235329 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:22:26.257166 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:22:26.291942 systemd[1]: Switching root. Jan 13 21:22:26.499681 systemd-journald[183]: Journal stopped Jan 13 21:22:28.689758 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:22:28.689804 kernel: SELinux: policy capability open_perms=1 Jan 13 21:22:28.689826 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:22:28.689844 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:22:28.689861 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:22:28.689879 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:22:28.689899 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:22:28.689921 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:22:28.689939 kernel: audit: type=1403 audit(1736803346.663:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:22:28.689961 systemd[1]: Successfully loaded SELinux policy in 90.808ms. Jan 13 21:22:28.690000 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.788ms. Jan 13 21:22:28.690022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:22:28.690042 systemd[1]: Detected virtualization google. Jan 13 21:22:28.690061 systemd[1]: Detected architecture x86-64. Jan 13 21:22:28.690087 systemd[1]: Detected first boot. 
Jan 13 21:22:28.690112 systemd[1]: Initializing machine ID from random generator. Jan 13 21:22:28.690133 zram_generator::config[1028]: No configuration found. Jan 13 21:22:28.690155 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:22:28.690175 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:22:28.690207 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:22:28.690228 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:22:28.690250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:22:28.690270 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:22:28.690291 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:22:28.690313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:22:28.690334 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:22:28.690360 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:22:28.690381 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:22:28.690402 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:22:28.690423 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:22:28.690445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:22:28.690466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:22:28.690487 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:22:28.690508 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:22:28.690533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:22:28.690554 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:22:28.690576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:22:28.690598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:22:28.690619 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:22:28.690640 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:22:28.690667 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:22:28.690689 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:22:28.690711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:22:28.690736 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:22:28.690758 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:22:28.690779 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:22:28.690801 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:22:28.690823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:22:28.690845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:22:28.690867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:22:28.690893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:22:28.690916 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:22:28.690938 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:22:28.690960 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:22:28.691004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:28.691030 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:22:28.691052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:22:28.691076 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:22:28.691099 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:22:28.691122 systemd[1]: Reached target machines.target - Containers. Jan 13 21:22:28.691145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:22:28.691167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:22:28.691190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:22:28.691225 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:22:28.691248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:22:28.691269 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:22:28.691292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:22:28.691314 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:22:28.691336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:22:28.691358 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:22:28.691380 kernel: ACPI: bus type drm_connector registered Jan 13 21:22:28.691405 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:22:28.691427 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:22:28.691448 kernel: fuse: init (API version 7.39) Jan 13 21:22:28.691468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:22:28.691490 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:22:28.691513 kernel: loop: module loaded Jan 13 21:22:28.691533 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:22:28.691584 systemd-journald[1115]: Collecting audit messages is disabled. Jan 13 21:22:28.691632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:22:28.691655 systemd-journald[1115]: Journal started Jan 13 21:22:28.691697 systemd-journald[1115]: Runtime Journal (/run/log/journal/f7e71f46b9a5437182288d99ff6c8929) is 8.0M, max 148.7M, 140.7M free. Jan 13 21:22:27.526849 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:22:27.546895 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 21:22:27.547428 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 13 21:22:28.720006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:22:28.746998 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:22:28.779012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:22:28.800008 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:22:28.800082 systemd[1]: Stopped verity-setup.service. Jan 13 21:22:28.826004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:28.836021 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:22:28.847603 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:22:28.858332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:22:28.869283 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:22:28.879297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:22:28.889261 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:22:28.899231 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:22:28.910366 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:22:28.921371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:22:28.933397 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:22:28.933621 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:22:28.945379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:22:28.945601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:22:28.957379 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:22:28.957597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:22:28.967371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:22:28.967590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:22:28.979413 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:22:28.979631 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:22:28.989398 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:22:28.989616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:22:28.999392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:22:29.009366 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:22:29.020372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:22:29.032381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:22:29.056528 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:22:29.072110 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:22:29.093517 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:22:29.103692 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 13 21:22:29.103888 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:22:29.115335 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:22:29.131225 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:22:29.154878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:22:29.166330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:22:29.174301 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:22:29.192311 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:22:29.203237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:22:29.205395 systemd-journald[1115]: Time spent on flushing to /var/log/journal/f7e71f46b9a5437182288d99ff6c8929 is 117.212ms for 928 entries. Jan 13 21:22:29.205395 systemd-journald[1115]: System Journal (/var/log/journal/f7e71f46b9a5437182288d99ff6c8929) is 8.0M, max 584.8M, 576.8M free. Jan 13 21:22:29.353547 systemd-journald[1115]: Received client request to flush runtime journal. Jan 13 21:22:29.353638 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:22:29.217307 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:22:29.227154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:22:29.236228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:22:29.257198 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:22:29.279257 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:22:29.304623 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:22:29.319810 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:22:29.330276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:22:29.346726 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:22:29.358697 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:22:29.369093 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:22:29.380477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:22:29.382795 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 13 21:22:29.382827 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 13 21:22:29.407058 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:22:29.423877 udevadm[1149]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:22:29.426173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:22:29.442991 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:22:29.450079 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 13 21:22:29.477098 kernel: loop1: detected capacity change from 0 to 54824 Jan 13 21:22:29.477041 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:22:29.501956 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:22:29.503019 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:22:29.549031 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 21:22:29.577065 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:22:29.603116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:22:29.664416 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 13 21:22:29.664894 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 13 21:22:29.674256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:22:29.685081 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:22:29.781128 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:22:29.833012 kernel: loop5: detected capacity change from 0 to 54824 Jan 13 21:22:29.871421 kernel: loop6: detected capacity change from 0 to 211296 Jan 13 21:22:29.914227 kernel: loop7: detected capacity change from 0 to 142488 Jan 13 21:22:29.959154 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 13 21:22:29.965151 (sd-merge)[1173]: Merged extensions into '/usr'. Jan 13 21:22:29.972841 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:22:29.972867 systemd[1]: Reloading... Jan 13 21:22:30.111002 zram_generator::config[1197]: No configuration found. Jan 13 21:22:30.361020 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:22:30.414070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:22:30.520048 systemd[1]: Reloading finished in 546 ms. Jan 13 21:22:30.549858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:22:30.560568 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:22:30.581223 systemd[1]: Starting ensure-sysext.service... Jan 13 21:22:30.598130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:22:30.617674 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:22:30.617878 systemd[1]: Reloading... Jan 13 21:22:30.652552 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:22:30.653234 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:22:30.654448 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:22:30.654857 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 21:22:30.655017 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 21:22:30.660798 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 21:22:30.660823 systemd-tmpfiles[1240]: Skipping /boot Jan 13 21:22:30.680625 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:22:30.680652 systemd-tmpfiles[1240]: Skipping /boot Jan 13 21:22:30.735000 zram_generator::config[1267]: No configuration found. Jan 13 21:22:30.874063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:22:30.938422 systemd[1]: Reloading finished in 319 ms. Jan 13 21:22:30.958469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:22:30.974614 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:22:30.995232 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:22:31.012269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:22:31.029385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:22:31.047929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:22:31.066184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:22:31.084173 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:22:31.106473 augenrules[1330]: No rules Jan 13 21:22:31.111849 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:22:31.123783 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:22:31.142950 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:22:31.162436 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 13 21:22:31.175416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:22:31.187547 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:22:31.199604 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:31.200527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:22:31.212091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:22:31.232098 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:22:31.249848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:22:31.250128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:22:31.258299 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:22:31.263095 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:31.264833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:22:31.285258 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:22:31.298877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 21:22:31.300302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:22:31.311866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:22:31.313297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:22:31.326082 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:22:31.326319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:22:31.336797 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:22:31.374028 systemd[1]: Finished ensure-sysext.service. Jan 13 21:22:31.376956 systemd-resolved[1322]: Positive Trust Anchors: Jan 13 21:22:31.377919 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:22:31.378402 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:22:31.402032 systemd-resolved[1322]: Defaulting to hostname 'linux'. Jan 13 21:22:31.407677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:31.408355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:22:31.423153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:22:31.441203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:22:31.459174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:22:31.481916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:22:31.499210 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:22:31.507207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:22:31.519205 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:22:31.530002 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1369) Jan 13 21:22:31.539131 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:22:31.549145 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:22:31.549188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:22:31.549755 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:22:31.559811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:22:31.561315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:22:31.572578 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 13 21:22:31.573517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:22:31.584509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:22:31.584761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:22:31.605030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:22:31.617005 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 21:22:31.619235 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:22:31.619480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:22:31.645610 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:22:31.645712 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:22:31.646564 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:22:31.656361 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:22:31.688841 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 13 21:22:31.688931 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:22:31.703769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 21:22:31.741539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:22:31.749026 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:22:31.754989 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:22:31.768268 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 21:22:31.784208 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:22:31.795086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:22:31.795185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:22:31.806585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:22:31.817789 systemd-networkd[1386]: lo: Link UP Jan 13 21:22:31.820011 systemd-networkd[1386]: lo: Gained carrier Jan 13 21:22:31.824211 systemd-networkd[1386]: Enumeration completed Jan 13 21:22:31.826236 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:22:31.826620 systemd[1]: Reached target network.target - Network. Jan 13 21:22:31.830864 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:22:31.830877 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:22:31.834177 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:22:31.834791 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:22:31.835144 systemd-networkd[1386]: eth0: Link UP Jan 13 21:22:31.835160 systemd-networkd[1386]: eth0: Gained carrier Jan 13 21:22:31.835184 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:22:31.845239 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
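[Editor's note] The trust-anchor dump a few lines up shows systemd-resolved loading the root-zone DS record (the DNSSEC root trust anchor) plus the usual negative anchors for RFC 1918 reverse zones and private TLDs. A quick way to confirm what the stub resolver ended up with, assuming resolvectl is available on the image:

    # Sketch: inspect systemd-resolved's runtime view.
    resolvectl status            # per-link DNS servers and DNSSEC mode
    resolvectl query kernel.org  # resolves via the stub and reports
                                 # whether the answer was DNSSEC-authenticated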
Jan 13 21:22:31.848077 systemd-networkd[1386]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 21:22:31.874087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:22:31.882636 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:22:31.887594 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 21:22:31.918507 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:22:31.919243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:22:31.924422 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:22:31.940074 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:22:31.968127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:22:31.979370 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:22:31.989244 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:22:32.000119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:22:32.011345 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:22:32.021255 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:22:32.032088 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:22:32.043063 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:22:32.043121 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:22:32.051096 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:22:32.059580 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:22:32.070619 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:22:32.082233 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:22:32.093130 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:22:32.104319 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:22:32.114777 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:22:32.125089 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:22:32.133167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:22:32.133227 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:22:32.138118 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:22:32.160216 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:22:32.176945 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:22:32.191552 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:22:32.224602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
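[Editor's note] eth0 was matched by /usr/lib/systemd/network/zz-default.network, which networkd itself flags as a catch-all keyed to a "potentially unpredictable interface name", and then acquired 10.128.0.49/32 plus a gateway over DHCP (the /32-with-gateway shape is normal on GCE). The catch-all is presumably a match-everything DHCP stanza; pinning the NIC by a stable property silences the warning. A sketch of such an override, where the MAC is a placeholder (GCE derives it from the internal IP):

    # Sketch: pin the NIC by MAC instead of relying on the catch-all unit.
    cat >/etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    MACAddress=42:01:0a:80:00:31

    [Network]
    DHCP=yes
    EOF
    networkctl reload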
Jan 13 21:22:32.234097 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:22:32.243213 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:22:32.243957 jq[1431]: false Jan 13 21:22:32.260222 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:22:32.278233 extend-filesystems[1432]: Found loop4 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found loop5 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found loop6 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found loop7 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda1 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda2 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda3 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found usr Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda4 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda6 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda7 Jan 13 21:22:32.278233 extend-filesystems[1432]: Found sda9 Jan 13 21:22:32.278233 extend-filesystems[1432]: Checking size of /dev/sda9 Jan 13 21:22:32.481684 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 21:22:32.481743 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 21:22:32.481773 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1352) Jan 13 21:22:32.278106 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:22:32.483028 extend-filesystems[1432]: Resized partition /dev/sda9 Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.286 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.303 INFO Fetch successful Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.303 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.305 INFO Fetch successful Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.305 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.305 INFO Fetch successful Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.305 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 21:22:32.504449 coreos-metadata[1429]: Jan 13 21:22:32.307 INFO Fetch successful Jan 13 21:22:32.329461 dbus-daemon[1430]: [system] SELinux support is enabled Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: ---------------------------------------------------- Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: corporation. 
Support and training for ntp-4 are Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: available at https://www.nwtime.org/support Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: ---------------------------------------------------- Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: proto: precision = 0.077 usec (-24) Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: basedate set to 2025-01-01 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: gps base set to 2025-01-05 (week 2348) Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listen normally on 3 eth0 10.128.0.49:123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listen normally on 4 lo [::1]:123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: failed to init interface for address fe80::4001:aff:fe80:31%2 Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: Listening on routing socket on fd #21 for interface updates Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:22:32.505473 ntpd[1437]: 13 Jan 21:22:32 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:22:32.295200 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:22:32.509897 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:22:32.509897 extend-filesystems[1453]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 21:22:32.509897 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 21:22:32.509897 extend-filesystems[1453]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 13 21:22:32.335139 dbus-daemon[1430]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1386 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:22:32.319715 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:22:32.567636 extend-filesystems[1432]: Resized filesystem in /dev/sda9 Jan 13 21:22:32.356803 ntpd[1437]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:22:32.403262 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:22:32.356835 ntpd[1437]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:22:32.417665 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 13 21:22:32.356851 ntpd[1437]: ---------------------------------------------------- Jan 13 21:22:32.419185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
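[Editor's note] The extend-filesystems pass above is a plain online ext4 grow: resize2fs stretched the root filesystem from 1617920 to 2538491 4k blocks while / stayed mounted. Done by hand the sequence is the following (device names taken from the log; growpart, from cloud-utils, is only needed when the partition itself must grow first):

    # Sketch: the same online ext4 grow, step by step.
    growpart /dev/sda 9   # extend partition 9 to fill the disk
    resize2fs /dev/sda9   # online-grow ext4 while / stays mounted
    df -h /               # confirm the new capacity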
Jan 13 21:22:32.356865 ntpd[1437]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:22:32.588122 update_engine[1461]: I20250113 21:22:32.524569 1461 main.cc:92] Flatcar Update Engine starting Jan 13 21:22:32.588122 update_engine[1461]: I20250113 21:22:32.532824 1461 update_check_scheduler.cc:74] Next update check in 7m30s Jan 13 21:22:32.427261 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:22:32.356879 ntpd[1437]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:22:32.481249 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:22:32.356893 ntpd[1437]: corporation. Support and training for ntp-4 are Jan 13 21:22:32.588946 jq[1463]: true Jan 13 21:22:32.496207 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:22:32.356907 ntpd[1437]: available at https://www.nwtime.org/support Jan 13 21:22:32.534499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:22:32.356931 ntpd[1437]: ---------------------------------------------------- Jan 13 21:22:32.534750 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:22:32.358745 ntpd[1437]: proto: precision = 0.077 usec (-24) Jan 13 21:22:32.535272 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:22:32.360002 ntpd[1437]: basedate set to 2025-01-01 Jan 13 21:22:32.535501 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:22:32.360066 ntpd[1437]: gps base set to 2025-01-05 (week 2348) Jan 13 21:22:32.382622 ntpd[1437]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:22:32.382701 ntpd[1437]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:22:32.382945 ntpd[1437]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:22:32.383028 ntpd[1437]: Listen normally on 3 eth0 10.128.0.49:123 Jan 13 21:22:32.383092 ntpd[1437]: Listen normally on 4 lo [::1]:123 Jan 13 21:22:32.383152 ntpd[1437]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:22:32.383193 ntpd[1437]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123 Jan 13 21:22:32.383213 ntpd[1437]: failed to init interface for address fe80::4001:aff:fe80:31%2 Jan 13 21:22:32.383271 ntpd[1437]: Listening on routing socket on fd #21 for interface updates Jan 13 21:22:32.389663 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:22:32.389732 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:22:32.602425 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:22:32.602658 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:22:32.609810 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:22:32.610421 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:22:32.610464 systemd-logind[1457]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 13 21:22:32.610497 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:22:32.610963 systemd-logind[1457]: New seat seat0. Jan 13 21:22:32.621082 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:22:32.632437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:22:32.637326 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 21:22:32.665997 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:22:32.674225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:22:32.679917 jq[1475]: true Jan 13 21:22:32.694416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:22:32.739303 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:22:32.764258 tar[1467]: linux-amd64/helm Jan 13 21:22:32.790833 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:22:32.802115 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:22:32.806356 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:22:32.822712 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:22:32.843577 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:22:32.852525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:22:32.859331 systemd[1]: Started sshd@0-10.128.0.49:22-147.75.109.163:46936.service - OpenSSH per-connection server daemon (147.75.109.163:46936). Jan 13 21:22:32.882383 systemd[1]: Starting sshkeys.service... Jan 13 21:22:32.889176 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:22:32.889484 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:22:32.912126 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:22:32.922163 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:22:32.922417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:22:32.943311 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:22:32.965428 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:22:32.965994 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:22:32.994234 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:22:33.015700 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:22:33.042654 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:22:33.115087 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:22:33.137483 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:22:33.160485 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:22:33.171542 systemd[1]: Reached target getty.target - Login Prompts. 
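[Editor's note] sshd-keygen above created the RSA/ECDSA/ED25519 host keys on first boot, and sshd runs socket-activated, hence the "per-connection server daemon" instances for each TCP peer. The key-generation step is the stock OpenSSH one-liner:

    # Sketch: what sshd-keygen boils down to.
    ssh-keygen -A   # create any missing host keys of all default types
    # Show the resulting fingerprints:
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done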
Jan 13 21:22:33.258081 coreos-metadata[1522]: Jan 13 21:22:33.257 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 21:22:33.260989 coreos-metadata[1522]: Jan 13 21:22:33.260 INFO Fetch failed with 404: resource not found Jan 13 21:22:33.260989 coreos-metadata[1522]: Jan 13 21:22:33.260 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 21:22:33.261629 coreos-metadata[1522]: Jan 13 21:22:33.261 INFO Fetch successful Jan 13 21:22:33.261629 coreos-metadata[1522]: Jan 13 21:22:33.261 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 21:22:33.267014 coreos-metadata[1522]: Jan 13 21:22:33.266 INFO Fetch failed with 404: resource not found Jan 13 21:22:33.267123 coreos-metadata[1522]: Jan 13 21:22:33.267 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 21:22:33.268486 coreos-metadata[1522]: Jan 13 21:22:33.268 INFO Fetch failed with 404: resource not found Jan 13 21:22:33.268486 coreos-metadata[1522]: Jan 13 21:22:33.268 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 21:22:33.269924 coreos-metadata[1522]: Jan 13 21:22:33.269 INFO Fetch successful Jan 13 21:22:33.270214 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 13 21:22:33.279076 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:22:33.279322 unknown[1522]: wrote ssh authorized keys file for user: core Jan 13 21:22:33.291366 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:22:33.315518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:33.318697 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:22:33.319720 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1513 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:22:33.331457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:22:33.342711 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:22:33.350903 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 13 21:22:33.365371 init.sh[1535]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 13 21:22:33.365371 init.sh[1535]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 13 21:22:33.365371 init.sh[1535]: + /usr/bin/google_instance_setup Jan 13 21:22:33.375326 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:22:33.402386 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:22:33.416510 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:22:33.444626 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:22:33.444517 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:22:33.461800 systemd[1]: Finished sshkeys.service. 
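[Editor's note] The sshkeys agent above walks the documented GCE metadata endpoints in order, treating each 404 as "attribute unset" (instance-level sshKeys, then ssh-keys, then the project-level pair). The same lookups by hand, with the header the metadata server requires:

    # Sketch: query the GCE metadata server directly.
    curl -s -H "Metadata-Flavor: Google" \
      "http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys"
    curl -s -H "Metadata-Flavor: Google" \
      "http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys"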
Jan 13 21:22:33.475377 containerd[1476]: time="2025-01-13T21:22:33.475271717Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:22:33.517183 polkitd[1545]: Started polkitd version 121 Jan 13 21:22:33.540315 sshd[1509]: Accepted publickey for core from 147.75.109.163 port 46936 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:33.543059 polkitd[1545]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:22:33.543167 polkitd[1545]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:22:33.547290 sshd[1509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:33.548998 polkitd[1545]: Finished loading, compiling and executing 2 rules Jan 13 21:22:33.553250 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:22:33.554024 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:22:33.555238 polkitd[1545]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:22:33.583324 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:22:33.592112 containerd[1476]: time="2025-01-13T21:22:33.590293376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.598590 containerd[1476]: time="2025-01-13T21:22:33.598538789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:22:33.598737 containerd[1476]: time="2025-01-13T21:22:33.598712272Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:22:33.598837 containerd[1476]: time="2025-01-13T21:22:33.598816053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:22:33.599193 containerd[1476]: time="2025-01-13T21:22:33.599161691Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:22:33.599334 containerd[1476]: time="2025-01-13T21:22:33.599311456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.599530 containerd[1476]: time="2025-01-13T21:22:33.599502939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:22:33.599626 containerd[1476]: time="2025-01-13T21:22:33.599606219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.599929030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.599964277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600014525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600033422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600173042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600477550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600672505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:22:33.600839 containerd[1476]: time="2025-01-13T21:22:33.600699428Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:22:33.602079 containerd[1476]: time="2025-01-13T21:22:33.602035032Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:22:33.602385 containerd[1476]: time="2025-01-13T21:22:33.602358530Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:22:33.602721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:22:33.621055 systemd-logind[1457]: New session 1 of user core. Jan 13 21:22:33.628066 containerd[1476]: time="2025-01-13T21:22:33.627521378Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:22:33.635682 systemd-hostnamed[1513]: Hostname set to (transient) Jan 13 21:22:33.637414 containerd[1476]: time="2025-01-13T21:22:33.632101266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:22:33.637414 containerd[1476]: time="2025-01-13T21:22:33.632173102Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:22:33.637414 containerd[1476]: time="2025-01-13T21:22:33.632212378Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:22:33.637414 containerd[1476]: time="2025-01-13T21:22:33.632245275Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:22:33.637414 containerd[1476]: time="2025-01-13T21:22:33.632449761Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:22:33.638039 systemd-resolved[1322]: System hostname changed to 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal'. Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640448257Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640646248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640675515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640695948Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640720993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640745698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640767837Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640791814Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640815675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640850990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640880212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640903603Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:22:33.641258 containerd[1476]: time="2025-01-13T21:22:33.640937330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.640963873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.643874817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.643922253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.643951211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644200584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644233014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644278091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644304729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644352114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644374089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644395427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644438307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644477000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:22:33.644596 containerd[1476]: time="2025-01-13T21:22:33.644540824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.645497 containerd[1476]: time="2025-01-13T21:22:33.644573678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.645497 containerd[1476]: time="2025-01-13T21:22:33.645102025Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:22:33.646662 containerd[1476]: time="2025-01-13T21:22:33.645628227Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:22:33.646662 containerd[1476]: time="2025-01-13T21:22:33.646517643Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:22:33.646662 containerd[1476]: time="2025-01-13T21:22:33.646552105Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:22:33.646662 containerd[1476]: time="2025-01-13T21:22:33.646602774Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:22:33.646662 containerd[1476]: time="2025-01-13T21:22:33.646622108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.648123 containerd[1476]: time="2025-01-13T21:22:33.646916516Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:22:33.648123 containerd[1476]: time="2025-01-13T21:22:33.646947548Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:22:33.648123 containerd[1476]: time="2025-01-13T21:22:33.647146384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:22:33.657215 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
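[Editor's note] The full CRI plugin configuration containerd dumps just below is mostly defaults; the setting worth noticing is SystemdCgroup:true on the runc runtime, which delegates cgroup management to systemd. For reference, the equivalent fragment as it would appear in containerd 1.7's version-2 config format (written to a scratch path here; merging it into /etc/containerd/config.toml is left to the operator):

    # Sketch: the runc/SystemdCgroup knob in containerd's TOML config.
    cat >/tmp/containerd-runc-cgroup.toml <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF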
Jan 13 21:22:33.657902 containerd[1476]: time="2025-01-13T21:22:33.656111212Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:22:33.657902 containerd[1476]: time="2025-01-13T21:22:33.657440585Z" level=info msg="Connect containerd service" Jan 13 21:22:33.657902 containerd[1476]: time="2025-01-13T21:22:33.657523380Z" level=info msg="using legacy CRI server" Jan 13 21:22:33.657902 containerd[1476]: time="2025-01-13T21:22:33.657536482Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:22:33.659233 containerd[1476]: time="2025-01-13T21:22:33.658106923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:22:33.662123 containerd[1476]: time="2025-01-13T21:22:33.661617070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:22:33.662123 containerd[1476]: 
time="2025-01-13T21:22:33.661978643Z" level=info msg="Start subscribing containerd event" Jan 13 21:22:33.662123 containerd[1476]: time="2025-01-13T21:22:33.662065460Z" level=info msg="Start recovering state" Jan 13 21:22:33.664863 containerd[1476]: time="2025-01-13T21:22:33.663927557Z" level=info msg="Start event monitor" Jan 13 21:22:33.664863 containerd[1476]: time="2025-01-13T21:22:33.664449737Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:22:33.664863 containerd[1476]: time="2025-01-13T21:22:33.664568242Z" level=info msg="Start snapshots syncer" Jan 13 21:22:33.664863 containerd[1476]: time="2025-01-13T21:22:33.664593342Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:22:33.664863 containerd[1476]: time="2025-01-13T21:22:33.664608189Z" level=info msg="Start streaming server" Jan 13 21:22:33.665482 containerd[1476]: time="2025-01-13T21:22:33.665454992Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:22:33.665647 containerd[1476]: time="2025-01-13T21:22:33.665627987Z" level=info msg="containerd successfully booted in 0.195866s" Jan 13 21:22:33.668687 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:22:33.689403 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:22:33.727678 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:22:33.989718 systemd[1565]: Queued start job for default target default.target. Jan 13 21:22:33.997566 systemd[1565]: Created slice app.slice - User Application Slice. Jan 13 21:22:33.997869 systemd[1565]: Reached target paths.target - Paths. Jan 13 21:22:33.997900 systemd[1565]: Reached target timers.target - Timers. Jan 13 21:22:34.003178 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:22:34.036701 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:22:34.036893 systemd[1565]: Reached target sockets.target - Sockets. Jan 13 21:22:34.036921 systemd[1565]: Reached target basic.target - Basic System. Jan 13 21:22:34.037221 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:22:34.037517 systemd[1565]: Reached target default.target - Main User Target. Jan 13 21:22:34.037591 systemd[1565]: Startup finished in 292ms. Jan 13 21:22:34.053657 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:22:34.101531 tar[1467]: linux-amd64/LICENSE Jan 13 21:22:34.101531 tar[1467]: linux-amd64/README.md Jan 13 21:22:34.120187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:22:34.309123 systemd[1]: Started sshd@1-10.128.0.49:22-147.75.109.163:46946.service - OpenSSH per-connection server daemon (147.75.109.163:46946). Jan 13 21:22:34.398728 instance-setup[1538]: INFO Running google_set_multiqueue. Jan 13 21:22:34.417757 instance-setup[1538]: INFO Set channels for eth0 to 2. Jan 13 21:22:34.423266 instance-setup[1538]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 13 21:22:34.425126 instance-setup[1538]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 13 21:22:34.425795 instance-setup[1538]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 13 21:22:34.427560 instance-setup[1538]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 13 21:22:34.428107 instance-setup[1538]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Jan 13 21:22:34.429999 instance-setup[1538]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 13 21:22:34.430513 instance-setup[1538]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 13 21:22:34.432247 instance-setup[1538]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 13 21:22:34.442646 instance-setup[1538]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:22:34.451400 instance-setup[1538]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 21:22:34.454323 instance-setup[1538]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 13 21:22:34.454536 instance-setup[1538]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 13 21:22:34.476540 init.sh[1535]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 13 21:22:34.640084 startup-script[1611]: INFO Starting startup scripts. Jan 13 21:22:34.646640 startup-script[1611]: INFO No startup scripts found in metadata. Jan 13 21:22:34.646729 startup-script[1611]: INFO Finished running startup scripts. Jan 13 21:22:34.650540 sshd[1581]: Accepted publickey for core from 147.75.109.163 port 46946 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:34.651859 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:34.661240 systemd-logind[1457]: New session 2 of user core. Jan 13 21:22:34.667043 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:22:34.673498 init.sh[1535]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 13 21:22:34.673498 init.sh[1535]: + daemon_pids=() Jan 13 21:22:34.673498 init.sh[1535]: + for d in accounts clock_skew network Jan 13 21:22:34.673498 init.sh[1535]: + daemon_pids+=($!) Jan 13 21:22:34.673707 init.sh[1535]: + for d in accounts clock_skew network Jan 13 21:22:34.674196 init.sh[1535]: + daemon_pids+=($!) Jan 13 21:22:34.674196 init.sh[1535]: + for d in accounts clock_skew network Jan 13 21:22:34.674305 init.sh[1614]: + /usr/bin/google_accounts_daemon Jan 13 21:22:34.674584 init.sh[1535]: + daemon_pids+=($!) Jan 13 21:22:34.674584 init.sh[1535]: + NOTIFY_SOCKET=/run/systemd/notify Jan 13 21:22:34.674584 init.sh[1535]: + /usr/bin/systemd-notify --ready Jan 13 21:22:34.677555 init.sh[1615]: + /usr/bin/google_clock_skew_daemon Jan 13 21:22:34.677845 init.sh[1616]: + /usr/bin/google_network_daemon Jan 13 21:22:34.700913 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 13 21:22:34.718139 init.sh[1535]: + wait -n 1614 1615 1616 Jan 13 21:22:34.874253 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:34.885693 systemd[1]: sshd@1-10.128.0.49:22-147.75.109.163:46946.service: Deactivated successfully. Jan 13 21:22:34.889323 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:22:34.892245 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:22:34.896041 systemd-logind[1457]: Removed session 2. Jan 13 21:22:34.937239 systemd[1]: Started sshd@2-10.128.0.49:22-147.75.109.163:46956.service - OpenSSH per-connection server daemon (147.75.109.163:46956). Jan 13 21:22:35.060646 google-networking[1616]: INFO Starting Google Networking daemon. Jan 13 21:22:35.087587 google-clock-skew[1615]: INFO Starting Google Clock Skew daemon. Jan 13 21:22:35.097027 google-clock-skew[1615]: INFO Clock drift token has changed: 0. 
Jan 13 21:22:35.131164 groupadd[1633]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 13 21:22:35.137059 groupadd[1633]: group added to /etc/gshadow: name=google-sudoers Jan 13 21:22:35.184117 groupadd[1633]: new group: name=google-sudoers, GID=1000 Jan 13 21:22:35.215248 google-accounts[1614]: INFO Starting Google Accounts daemon. Jan 13 21:22:35.227928 google-accounts[1614]: WARNING OS Login not installed. Jan 13 21:22:35.229641 google-accounts[1614]: INFO Creating a new user account for 0. Jan 13 21:22:35.236154 init.sh[1643]: useradd: invalid user name '0': use --badname to ignore Jan 13 21:22:35.236622 google-accounts[1614]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 13 21:22:35.253575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:35.261670 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 46956 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:35.263360 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:35.266169 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:22:35.270513 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:22:35.276478 systemd[1]: Startup finished in 969ms (kernel) + 39.890s (initrd) + 8.691s (userspace) = 49.551s. Jan 13 21:22:35.281096 systemd-logind[1457]: New session 3 of user core. Jan 13 21:22:35.002486 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:22:35.036302 systemd-journald[1115]: Time jumped backwards, rotating. Jan 13 21:22:35.012059 systemd-resolved[1322]: Clock change detected. Flushing caches. Jan 13 21:22:35.010712 google-clock-skew[1615]: INFO Synced system time with hardware clock. Jan 13 21:22:35.072658 ntpd[1437]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:31%2]:123 Jan 13 21:22:35.073043 ntpd[1437]: 13 Jan 21:22:35 ntpd[1437]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:31%2]:123 Jan 13 21:22:35.202408 sshd[1627]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:35.208072 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:22:35.209041 systemd[1]: sshd@2-10.128.0.49:22-147.75.109.163:46956.service: Deactivated successfully. Jan 13 21:22:35.211850 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:22:35.214302 systemd-logind[1457]: Removed session 3. Jan 13 21:22:36.001908 kubelet[1648]: E0113 21:22:36.001748 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:22:36.004035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:22:36.004321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:22:36.004741 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Jan 13 21:22:45.260688 systemd[1]: Started sshd@3-10.128.0.49:22-147.75.109.163:59040.service - OpenSSH per-connection server daemon (147.75.109.163:59040). 
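[Editor's note] The kubelet failure just above is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during `kubeadm init` or `kubeadm join`, so the unit crash-loops until then. For orientation only, a minimal hand-written KubeletConfiguration of the kind kubeadm generates (values illustrative, not taken from this host):

    # Sketch: a minimal kubelet config file; kubeadm normally writes this.
    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests
    EOF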
Jan 13 21:22:45.545221 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 59040 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:45.547118 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:45.553356 systemd-logind[1457]: New session 4 of user core. Jan 13 21:22:45.563521 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:22:45.761079 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:45.766081 systemd[1]: sshd@3-10.128.0.49:22-147.75.109.163:59040.service: Deactivated successfully. Jan 13 21:22:45.768164 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:22:45.769094 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:22:45.770451 systemd-logind[1457]: Removed session 4. Jan 13 21:22:45.818656 systemd[1]: Started sshd@4-10.128.0.49:22-147.75.109.163:59054.service - OpenSSH per-connection server daemon (147.75.109.163:59054). Jan 13 21:22:46.058775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:22:46.066558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:46.119059 sshd[1673]: Accepted publickey for core from 147.75.109.163 port 59054 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:46.121095 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:46.130124 systemd-logind[1457]: New session 5 of user core. Jan 13 21:22:46.142517 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:22:46.318538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:46.329337 sshd[1673]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:46.330120 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:22:46.337636 systemd[1]: sshd@4-10.128.0.49:22-147.75.109.163:59054.service: Deactivated successfully. Jan 13 21:22:46.341743 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:22:46.343433 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:22:46.345646 systemd-logind[1457]: Removed session 5. Jan 13 21:22:46.389437 systemd[1]: Started sshd@5-10.128.0.49:22-147.75.109.163:59070.service - OpenSSH per-connection server daemon (147.75.109.163:59070). Jan 13 21:22:46.401441 kubelet[1685]: E0113 21:22:46.401397 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:22:46.408026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:22:46.408255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:22:46.682999 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 59070 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:46.684551 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:46.690899 systemd-logind[1457]: New session 6 of user core. Jan 13 21:22:46.698482 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 21:22:46.897960 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:46.901992 systemd[1]: sshd@5-10.128.0.49:22-147.75.109.163:59070.service: Deactivated successfully. Jan 13 21:22:46.904114 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:22:46.905724 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:22:46.907071 systemd-logind[1457]: Removed session 6. Jan 13 21:22:46.955662 systemd[1]: Started sshd@6-10.128.0.49:22-147.75.109.163:59082.service - OpenSSH per-connection server daemon (147.75.109.163:59082). Jan 13 21:22:47.235069 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 59082 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:47.236848 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:47.242987 systemd-logind[1457]: New session 7 of user core. Jan 13 21:22:47.252509 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:22:47.425893 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:22:47.426419 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:22:47.443016 sudo[1706]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:47.485433 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:47.490189 systemd[1]: sshd@6-10.128.0.49:22-147.75.109.163:59082.service: Deactivated successfully. Jan 13 21:22:47.492441 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:22:47.494258 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:22:47.495765 systemd-logind[1457]: Removed session 7. Jan 13 21:22:47.543651 systemd[1]: Started sshd@7-10.128.0.49:22-147.75.109.163:42432.service - OpenSSH per-connection server daemon (147.75.109.163:42432). Jan 13 21:22:47.832263 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 42432 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:47.834145 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:47.840274 systemd-logind[1457]: New session 8 of user core. Jan 13 21:22:47.850494 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:22:48.013889 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:22:48.014384 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:22:48.019261 sudo[1715]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:48.032068 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:22:48.032558 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:22:48.047775 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:22:48.051916 auditctl[1718]: No rules Jan 13 21:22:48.052427 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:22:48.052688 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:22:48.055865 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:22:48.100635 augenrules[1736]: No rules Jan 13 21:22:48.102408 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
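[Editor's note] The sudo session above deletes the shipped audit rules and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". Rules live as fragments under /etc/audit/rules.d, so reinstating one is a matter of dropping a file back in and reloading; the rule below is a made-up example:

    # Sketch: reinstate an audit rule and reload it into the kernel.
    cat >/etc/audit/rules.d/90-local.rules <<'EOF'
    -w /etc/passwd -p wa -k passwd_changes
    EOF
    augenrules --load   # merge /etc/audit/rules.d/*.rules and apply them
    auditctl -l         # list the rules now active in the kernel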
Jan 13 21:22:48.104503 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:48.148376 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:48.153506 systemd[1]: sshd@7-10.128.0.49:22-147.75.109.163:42432.service: Deactivated successfully. Jan 13 21:22:48.155746 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:22:48.156765 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:22:48.158448 systemd-logind[1457]: Removed session 8. Jan 13 21:22:48.203686 systemd[1]: Started sshd@8-10.128.0.49:22-147.75.109.163:42444.service - OpenSSH per-connection server daemon (147.75.109.163:42444). Jan 13 21:22:48.497910 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 42444 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:22:48.499711 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:48.505914 systemd-logind[1457]: New session 9 of user core. Jan 13 21:22:48.515473 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:22:48.678216 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:22:48.678731 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:22:49.123695 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:22:49.136835 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:22:49.576748 dockerd[1762]: time="2025-01-13T21:22:49.576645360Z" level=info msg="Starting up" Jan 13 21:22:49.717603 dockerd[1762]: time="2025-01-13T21:22:49.717548876Z" level=info msg="Loading containers: start." Jan 13 21:22:49.859314 kernel: Initializing XFRM netlink socket Jan 13 21:22:49.969094 systemd-networkd[1386]: docker0: Link UP Jan 13 21:22:49.990187 dockerd[1762]: time="2025-01-13T21:22:49.990113732Z" level=info msg="Loading containers: done." Jan 13 21:22:50.008801 dockerd[1762]: time="2025-01-13T21:22:50.008735801Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:22:50.008975 dockerd[1762]: time="2025-01-13T21:22:50.008861223Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:22:50.009036 dockerd[1762]: time="2025-01-13T21:22:50.009013363Z" level=info msg="Daemon has completed initialization" Jan 13 21:22:50.045884 dockerd[1762]: time="2025-01-13T21:22:50.045488932Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:22:50.045719 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:22:51.127131 containerd[1476]: time="2025-01-13T21:22:51.127069452Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:22:51.589899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370119246.mount: Deactivated successfully. 
Jan 13 21:22:53.345930 containerd[1476]: time="2025-01-13T21:22:53.345836236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:53.347551 containerd[1476]: time="2025-01-13T21:22:53.347492104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35145882" Jan 13 21:22:53.348508 containerd[1476]: time="2025-01-13T21:22:53.348443042Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:53.351939 containerd[1476]: time="2025-01-13T21:22:53.351859132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:53.353816 containerd[1476]: time="2025-01-13T21:22:53.353501053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.226376102s" Jan 13 21:22:53.353816 containerd[1476]: time="2025-01-13T21:22:53.353554272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:22:53.384044 containerd[1476]: time="2025-01-13T21:22:53.384001855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:22:55.068142 containerd[1476]: time="2025-01-13T21:22:55.068071875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:55.069710 containerd[1476]: time="2025-01-13T21:22:55.069643514Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32219666" Jan 13 21:22:55.070767 containerd[1476]: time="2025-01-13T21:22:55.070695004Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:55.074175 containerd[1476]: time="2025-01-13T21:22:55.074113212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:55.075699 containerd[1476]: time="2025-01-13T21:22:55.075531957Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.691426301s" Jan 13 21:22:55.075699 containerd[1476]: time="2025-01-13T21:22:55.075578444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:22:55.107369 containerd[1476]: time="2025-01-13T21:22:55.107323116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:22:56.127725 containerd[1476]: time="2025-01-13T21:22:56.127657114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:56.129262 containerd[1476]: time="2025-01-13T21:22:56.129202078Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17334738" Jan 13 21:22:56.130305 containerd[1476]: time="2025-01-13T21:22:56.130212899Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:56.134171 containerd[1476]: time="2025-01-13T21:22:56.134066306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:56.135701 containerd[1476]: time="2025-01-13T21:22:56.135526420Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.028148317s" Jan 13 21:22:56.135701 containerd[1476]: time="2025-01-13T21:22:56.135575004Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:22:56.167122 containerd[1476]: time="2025-01-13T21:22:56.167059432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:22:56.485613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:22:56.498597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:56.803604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:56.818810 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:22:56.910307 kubelet[1991]: E0113 21:22:56.910232 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:22:56.913571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:22:56.913802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:22:57.370816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535372180.mount: Deactivated successfully. 
Jan 13 21:22:57.907556 containerd[1476]: time="2025-01-13T21:22:57.907491786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:57.908821 containerd[1476]: time="2025-01-13T21:22:57.908751621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28621853" Jan 13 21:22:57.910175 containerd[1476]: time="2025-01-13T21:22:57.910104694Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:57.912827 containerd[1476]: time="2025-01-13T21:22:57.912761070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:57.914093 containerd[1476]: time="2025-01-13T21:22:57.913679985Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.746567314s" Jan 13 21:22:57.914093 containerd[1476]: time="2025-01-13T21:22:57.913727901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:22:57.942675 containerd[1476]: time="2025-01-13T21:22:57.942620497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:22:58.341039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811444048.mount: Deactivated successfully. 
Jan 13 21:22:59.438433 containerd[1476]: time="2025-01-13T21:22:59.438364080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.440190 containerd[1476]: time="2025-01-13T21:22:59.440128375Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 13 21:22:59.441096 containerd[1476]: time="2025-01-13T21:22:59.441027137Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.445805 containerd[1476]: time="2025-01-13T21:22:59.445745063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.447777 containerd[1476]: time="2025-01-13T21:22:59.447728805Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.505057286s" Jan 13 21:22:59.447880 containerd[1476]: time="2025-01-13T21:22:59.447782760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:22:59.480704 containerd[1476]: time="2025-01-13T21:22:59.480404797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:22:59.818922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864708388.mount: Deactivated successfully. 
Jan 13 21:22:59.823945 containerd[1476]: time="2025-01-13T21:22:59.823884870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.825143 containerd[1476]: time="2025-01-13T21:22:59.825089937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 13 21:22:59.826180 containerd[1476]: time="2025-01-13T21:22:59.826107692Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.829076 containerd[1476]: time="2025-01-13T21:22:59.829012667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:22:59.831087 containerd[1476]: time="2025-01-13T21:22:59.830133099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 349.667083ms" Jan 13 21:22:59.831087 containerd[1476]: time="2025-01-13T21:22:59.830178755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:22:59.861586 containerd[1476]: time="2025-01-13T21:22:59.861538411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:23:00.246046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035367268.mount: Deactivated successfully. Jan 13 21:23:02.334225 containerd[1476]: time="2025-01-13T21:23:02.334159319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:02.335890 containerd[1476]: time="2025-01-13T21:23:02.335828875Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115" Jan 13 21:23:02.337069 containerd[1476]: time="2025-01-13T21:23:02.336997259Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:02.340621 containerd[1476]: time="2025-01-13T21:23:02.340561914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:02.342606 containerd[1476]: time="2025-01-13T21:23:02.342130294Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.48053696s" Jan 13 21:23:02.342606 containerd[1476]: time="2025-01-13T21:23:02.342178368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:23:03.387099 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:23:05.342904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:23:05.355658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:23:05.388120 systemd[1]: Reloading requested from client PID 2184 ('systemctl') (unit session-9.scope)... Jan 13 21:23:05.388148 systemd[1]: Reloading... Jan 13 21:23:05.527308 zram_generator::config[2224]: No configuration found. Jan 13 21:23:05.684386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:23:05.782746 systemd[1]: Reloading finished in 393 ms. Jan 13 21:23:05.841495 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:23:05.841627 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:23:05.842005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:23:05.844798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:23:06.069226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:23:06.081812 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:23:06.145377 kubelet[2276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:23:06.145377 kubelet[2276]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:23:06.145377 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:23:06.145903 kubelet[2276]: I0113 21:23:06.145462 2276 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:23:06.844314 kubelet[2276]: I0113 21:23:06.843646 2276 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:23:06.844314 kubelet[2276]: I0113 21:23:06.843686 2276 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:23:06.844314 kubelet[2276]: I0113 21:23:06.844226 2276 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:23:06.871768 kubelet[2276]: E0113 21:23:06.871736 2276 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.872807 kubelet[2276]: I0113 21:23:06.872773 2276 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:23:06.886797 kubelet[2276]: I0113 21:23:06.886759 2276 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:23:06.887324 kubelet[2276]: I0113 21:23:06.887272 2276 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:23:06.887570 kubelet[2276]: I0113 21:23:06.887534 2276 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:23:06.887570 kubelet[2276]: I0113 21:23:06.887568 2276 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:23:06.887807 kubelet[2276]: I0113 21:23:06.887586 2276 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:23:06.887807 kubelet[2276]: I0113 21:23:06.887726 2276 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:23:06.887910 kubelet[2276]: I0113 21:23:06.887864 2276 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:23:06.887910 kubelet[2276]: I0113 21:23:06.887886 2276 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:23:06.887991 kubelet[2276]: I0113 21:23:06.887930 2276 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:23:06.887991 kubelet[2276]: I0113 21:23:06.887949 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:23:06.891048 kubelet[2276]: W0113 21:23:06.890366 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.891048 kubelet[2276]: E0113 21:23:06.890449 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.891048 kubelet[2276]: W0113 21:23:06.890787 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: 
connect: connection refused Jan 13 21:23:06.891048 kubelet[2276]: E0113 21:23:06.890840 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.891704 kubelet[2276]: I0113 21:23:06.891679 2276 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:23:06.896447 kubelet[2276]: I0113 21:23:06.896398 2276 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:23:06.896529 kubelet[2276]: W0113 21:23:06.896497 2276 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:23:06.897479 kubelet[2276]: I0113 21:23:06.897256 2276 server.go:1256] "Started kubelet" Jan 13 21:23:06.898681 kubelet[2276]: I0113 21:23:06.898637 2276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:23:06.906634 kubelet[2276]: I0113 21:23:06.906591 2276 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:23:06.908366 kubelet[2276]: I0113 21:23:06.907856 2276 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:23:06.909473 kubelet[2276]: I0113 21:23:06.909449 2276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:23:06.909720 kubelet[2276]: I0113 21:23:06.909684 2276 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:23:06.912667 kubelet[2276]: E0113 21:23:06.911718 2276 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal.181a5d7a78b4327e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,UID:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:23:06.897216126 +0000 UTC m=+0.810003019,LastTimestamp:2025-01-13 21:23:06.897216126 +0000 UTC m=+0.810003019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,}" Jan 13 21:23:06.915504 kubelet[2276]: I0113 21:23:06.915475 2276 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:23:06.915864 kubelet[2276]: I0113 21:23:06.915840 2276 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:23:06.915989 kubelet[2276]: I0113 21:23:06.915971 2276 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:23:06.917013 kubelet[2276]: W0113 21:23:06.916953 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.917103 kubelet[2276]: E0113 21:23:06.917027 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.917168 kubelet[2276]: E0113 21:23:06.917143 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="200ms" Jan 13 21:23:06.924402 kubelet[2276]: I0113 21:23:06.924371 2276 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:23:06.924402 kubelet[2276]: I0113 21:23:06.924401 2276 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:23:06.924542 kubelet[2276]: I0113 21:23:06.924506 2276 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:23:06.940219 kubelet[2276]: E0113 21:23:06.939977 2276 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:23:06.940219 kubelet[2276]: I0113 21:23:06.940085 2276 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:23:06.941825 kubelet[2276]: I0113 21:23:06.941783 2276 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:23:06.941825 kubelet[2276]: I0113 21:23:06.941815 2276 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:23:06.941973 kubelet[2276]: I0113 21:23:06.941839 2276 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:23:06.941973 kubelet[2276]: E0113 21:23:06.941909 2276 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:23:06.950527 kubelet[2276]: W0113 21:23:06.950458 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.950527 kubelet[2276]: E0113 21:23:06.950503 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:06.960855 kubelet[2276]: I0113 21:23:06.960784 2276 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:23:06.960855 kubelet[2276]: I0113 21:23:06.960807 2276 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:23:06.961010 kubelet[2276]: I0113 21:23:06.960864 2276 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:23:06.965536 kubelet[2276]: I0113 21:23:06.965496 2276 policy_none.go:49] "None policy: Start" Jan 13 21:23:06.966188 kubelet[2276]: I0113 21:23:06.966168 2276 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:23:06.966334 kubelet[2276]: I0113 21:23:06.966309 2276 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:23:06.975940 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:23:06.986353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:23:06.990869 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:23:07.002487 kubelet[2276]: I0113 21:23:07.002201 2276 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:23:07.002833 kubelet[2276]: I0113 21:23:07.002579 2276 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:23:07.005516 kubelet[2276]: E0113 21:23:07.005247 2276 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" not found" Jan 13 21:23:07.023022 kubelet[2276]: I0113 21:23:07.022998 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.023450 kubelet[2276]: E0113 21:23:07.023416 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.042587 kubelet[2276]: I0113 21:23:07.042551 2276 topology_manager.go:215] "Topology Admit Handler" podUID="4d55d8d6df5e2676e54b6f296ffd75ec" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.050977 kubelet[2276]: I0113 21:23:07.050927 2276 topology_manager.go:215] "Topology Admit Handler" podUID="f1f2c6fba66a1913071531d921368ceb" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.055847 kubelet[2276]: I0113 21:23:07.055568 2276 topology_manager.go:215] "Topology Admit Handler" podUID="678eab33ba46b9a64b5fcb4b71a5d277" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.061279 systemd[1]: Created slice kubepods-burstable-pod4d55d8d6df5e2676e54b6f296ffd75ec.slice - libcontainer container kubepods-burstable-pod4d55d8d6df5e2676e54b6f296ffd75ec.slice. Jan 13 21:23:07.076166 systemd[1]: Created slice kubepods-burstable-podf1f2c6fba66a1913071531d921368ceb.slice - libcontainer container kubepods-burstable-podf1f2c6fba66a1913071531d921368ceb.slice. Jan 13 21:23:07.082777 systemd[1]: Created slice kubepods-burstable-pod678eab33ba46b9a64b5fcb4b71a5d277.slice - libcontainer container kubepods-burstable-pod678eab33ba46b9a64b5fcb4b71a5d277.slice. 
Jan 13 21:23:07.117609 kubelet[2276]: I0113 21:23:07.117163 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117609 kubelet[2276]: I0113 21:23:07.117223 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117609 kubelet[2276]: I0113 21:23:07.117260 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117609 kubelet[2276]: I0113 21:23:07.117313 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117872 kubelet[2276]: I0113 21:23:07.117351 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117872 kubelet[2276]: I0113 21:23:07.117385 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d55d8d6df5e2676e54b6f296ffd75ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"4d55d8d6df5e2676e54b6f296ffd75ec\") " pod="kube-system/kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117872 kubelet[2276]: I0113 21:23:07.117421 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.117872 kubelet[2276]: I0113 21:23:07.117475 2276 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.118071 kubelet[2276]: I0113 21:23:07.117509 2276 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.118386 kubelet[2276]: E0113 21:23:07.118323 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="400ms" Jan 13 21:23:07.228229 kubelet[2276]: I0113 21:23:07.228178 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.228729 kubelet[2276]: E0113 21:23:07.228561 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.371329 containerd[1476]: time="2025-01-13T21:23:07.371162452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:4d55d8d6df5e2676e54b6f296ffd75ec,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:07.384737 containerd[1476]: time="2025-01-13T21:23:07.384681333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:f1f2c6fba66a1913071531d921368ceb,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:07.386411 containerd[1476]: time="2025-01-13T21:23:07.386250120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:678eab33ba46b9a64b5fcb4b71a5d277,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:07.519463 kubelet[2276]: E0113 21:23:07.519415 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="800ms" Jan 13 21:23:07.655254 kubelet[2276]: I0113 21:23:07.655105 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:07.655830 kubelet[2276]: E0113 21:23:07.655796 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 
21:23:07.734516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766942241.mount: Deactivated successfully. Jan 13 21:23:07.743149 containerd[1476]: time="2025-01-13T21:23:07.743079072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:23:07.744342 containerd[1476]: time="2025-01-13T21:23:07.744280064Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:23:07.745469 containerd[1476]: time="2025-01-13T21:23:07.745409104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 13 21:23:07.746319 containerd[1476]: time="2025-01-13T21:23:07.746249429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:23:07.747937 containerd[1476]: time="2025-01-13T21:23:07.747898506Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:23:07.749477 containerd[1476]: time="2025-01-13T21:23:07.749412760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:23:07.749996 containerd[1476]: time="2025-01-13T21:23:07.749833941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:23:07.752551 containerd[1476]: time="2025-01-13T21:23:07.752508809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:23:07.755859 containerd[1476]: time="2025-01-13T21:23:07.755245938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 370.472407ms" Jan 13 21:23:07.757185 containerd[1476]: time="2025-01-13T21:23:07.756886514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 385.605301ms" Jan 13 21:23:07.759771 containerd[1476]: time="2025-01-13T21:23:07.759496564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 373.138584ms" Jan 13 21:23:07.969416 kubelet[2276]: W0113 21:23:07.968847 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:07.969416 kubelet[2276]: E0113 21:23:07.968920 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused Jan 13 21:23:07.974901 containerd[1476]: time="2025-01-13T21:23:07.974739181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:07.975371 containerd[1476]: time="2025-01-13T21:23:07.974946654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:07.978406 containerd[1476]: time="2025-01-13T21:23:07.978082420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.978406 containerd[1476]: time="2025-01-13T21:23:07.978199212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.978667 containerd[1476]: time="2025-01-13T21:23:07.977681276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:07.978667 containerd[1476]: time="2025-01-13T21:23:07.977744550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:07.978667 containerd[1476]: time="2025-01-13T21:23:07.977787748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.978667 containerd[1476]: time="2025-01-13T21:23:07.977908024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.982530 containerd[1476]: time="2025-01-13T21:23:07.982161511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:07.982530 containerd[1476]: time="2025-01-13T21:23:07.982249208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:07.982530 containerd[1476]: time="2025-01-13T21:23:07.982278956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:07.982530 containerd[1476]: time="2025-01-13T21:23:07.982420031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:08.033517 systemd[1]: Started cri-containerd-4d4765f6b6dfab47e7205f2f1bdabbfe2efbc4c18d6dd3158a93b0e48d0efe25.scope - libcontainer container 4d4765f6b6dfab47e7205f2f1bdabbfe2efbc4c18d6dd3158a93b0e48d0efe25. Jan 13 21:23:08.035667 systemd[1]: Started cri-containerd-b5bd3de32ef4e95ed69bf9b5a0d4de9fd162fd96e45be184e2ec96293306e73a.scope - libcontainer container b5bd3de32ef4e95ed69bf9b5a0d4de9fd162fd96e45be184e2ec96293306e73a. 
Jan 13 21:23:08.049772 systemd[1]: Started cri-containerd-9a823db0e252973799fa21c20a3e201078e4e5b9f3a8a4e276d56df24696eda5.scope - libcontainer container 9a823db0e252973799fa21c20a3e201078e4e5b9f3a8a4e276d56df24696eda5. Jan 13 21:23:08.125333 containerd[1476]: time="2025-01-13T21:23:08.125090607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:f1f2c6fba66a1913071531d921368ceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5bd3de32ef4e95ed69bf9b5a0d4de9fd162fd96e45be184e2ec96293306e73a\"" Jan 13 21:23:08.129924 kubelet[2276]: E0113 21:23:08.129887 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-21291" Jan 13 21:23:08.135934 containerd[1476]: time="2025-01-13T21:23:08.135893143Z" level=info msg="CreateContainer within sandbox \"b5bd3de32ef4e95ed69bf9b5a0d4de9fd162fd96e45be184e2ec96293306e73a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:23:08.157605 containerd[1476]: time="2025-01-13T21:23:08.157361292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:4d55d8d6df5e2676e54b6f296ffd75ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4765f6b6dfab47e7205f2f1bdabbfe2efbc4c18d6dd3158a93b0e48d0efe25\"" Jan 13 21:23:08.162333 kubelet[2276]: E0113 21:23:08.161982 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-21291" Jan 13 21:23:08.164230 containerd[1476]: time="2025-01-13T21:23:08.164196676Z" level=info msg="CreateContainer within sandbox \"4d4765f6b6dfab47e7205f2f1bdabbfe2efbc4c18d6dd3158a93b0e48d0efe25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:23:08.173644 containerd[1476]: time="2025-01-13T21:23:08.173606158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,Uid:678eab33ba46b9a64b5fcb4b71a5d277,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a823db0e252973799fa21c20a3e201078e4e5b9f3a8a4e276d56df24696eda5\"" Jan 13 21:23:08.174946 containerd[1476]: time="2025-01-13T21:23:08.174910803Z" level=info msg="CreateContainer within sandbox \"b5bd3de32ef4e95ed69bf9b5a0d4de9fd162fd96e45be184e2ec96293306e73a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b89355b18a1eee001f0acfd301b3210ae3c1cbb888134205dc01cffd44bafb19\"" Jan 13 21:23:08.175501 kubelet[2276]: E0113 21:23:08.175324 2276 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flat" Jan 13 21:23:08.175771 containerd[1476]: time="2025-01-13T21:23:08.175693515Z" level=info msg="StartContainer for \"b89355b18a1eee001f0acfd301b3210ae3c1cbb888134205dc01cffd44bafb19\"" Jan 13 21:23:08.179165 containerd[1476]: time="2025-01-13T21:23:08.179123722Z" level=info msg="CreateContainer within sandbox \"9a823db0e252973799fa21c20a3e201078e4e5b9f3a8a4e276d56df24696eda5\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:23:08.184834 containerd[1476]: time="2025-01-13T21:23:08.184730217Z" level=info msg="CreateContainer within sandbox \"4d4765f6b6dfab47e7205f2f1bdabbfe2efbc4c18d6dd3158a93b0e48d0efe25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10d2485c5695bb4877fb0789af006f9dab4db4d13bcb06b784d239136d08afb7\"" Jan 13 21:23:08.186960 containerd[1476]: time="2025-01-13T21:23:08.186012972Z" level=info msg="StartContainer for \"10d2485c5695bb4877fb0789af006f9dab4db4d13bcb06b784d239136d08afb7\"" Jan 13 21:23:08.203367 containerd[1476]: time="2025-01-13T21:23:08.203325666Z" level=info msg="CreateContainer within sandbox \"9a823db0e252973799fa21c20a3e201078e4e5b9f3a8a4e276d56df24696eda5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61eba691dd9c8c512aabf099e79b337ce8eca5006d6c9ab5aa9b521855ad0da2\"" Jan 13 21:23:08.205179 containerd[1476]: time="2025-01-13T21:23:08.205147419Z" level=info msg="StartContainer for \"61eba691dd9c8c512aabf099e79b337ce8eca5006d6c9ab5aa9b521855ad0da2\"" Jan 13 21:23:08.219239 kubelet[2276]: E0113 21:23:08.219214 2276 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal.181a5d7a78b4327e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,UID:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:23:06.897216126 +0000 UTC m=+0.810003019,LastTimestamp:2025-01-13 21:23:06.897216126 +0000 UTC m=+0.810003019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal,}" Jan 13 21:23:08.224484 systemd[1]: Started cri-containerd-b89355b18a1eee001f0acfd301b3210ae3c1cbb888134205dc01cffd44bafb19.scope - libcontainer container b89355b18a1eee001f0acfd301b3210ae3c1cbb888134205dc01cffd44bafb19. 
Jan 13 21:23:08.232140 kubelet[2276]: W0113 21:23:08.232048 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:08.232567 kubelet[2276]: E0113 21:23:08.232152 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:08.251390 kubelet[2276]: W0113 21:23:08.249255 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:08.251642 kubelet[2276]: E0113 21:23:08.251617 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:08.266609 systemd[1]: Started cri-containerd-10d2485c5695bb4877fb0789af006f9dab4db4d13bcb06b784d239136d08afb7.scope - libcontainer container 10d2485c5695bb4877fb0789af006f9dab4db4d13bcb06b784d239136d08afb7.
Jan 13 21:23:08.283517 systemd[1]: Started cri-containerd-61eba691dd9c8c512aabf099e79b337ce8eca5006d6c9ab5aa9b521855ad0da2.scope - libcontainer container 61eba691dd9c8c512aabf099e79b337ce8eca5006d6c9ab5aa9b521855ad0da2.
Jan 13 21:23:08.320323 kubelet[2276]: E0113 21:23:08.320270 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="1.6s"
Jan 13 21:23:08.332948 containerd[1476]: time="2025-01-13T21:23:08.332721432Z" level=info msg="StartContainer for \"b89355b18a1eee001f0acfd301b3210ae3c1cbb888134205dc01cffd44bafb19\" returns successfully"
Jan 13 21:23:08.380993 containerd[1476]: time="2025-01-13T21:23:08.380512813Z" level=info msg="StartContainer for \"10d2485c5695bb4877fb0789af006f9dab4db4d13bcb06b784d239136d08afb7\" returns successfully"
Jan 13 21:23:08.401751 containerd[1476]: time="2025-01-13T21:23:08.401446202Z" level=info msg="StartContainer for \"61eba691dd9c8c512aabf099e79b337ce8eca5006d6c9ab5aa9b521855ad0da2\" returns successfully"
Jan 13 21:23:08.460612 kubelet[2276]: I0113 21:23:08.460570 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:08.461602 kubelet[2276]: E0113 21:23:08.461569 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:08.466991 kubelet[2276]: W0113 21:23:08.466924 2276 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:08.467082 kubelet[2276]: E0113 21:23:08.467005 2276 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Jan 13 21:23:10.067275 kubelet[2276]: I0113 21:23:10.067235 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:11.223643 kubelet[2276]: E0113 21:23:11.223587 2276 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:11.259416 kubelet[2276]: I0113 21:23:11.259353 2276 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:11.358422 kubelet[2276]: E0113 21:23:11.358380 2276 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:11.892728 kubelet[2276]: I0113 21:23:11.892683 2276 apiserver.go:52] "Watching apiserver"
Jan 13 21:23:11.916950 kubelet[2276]: I0113 21:23:11.916907 2276 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 21:23:14.240674 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-9.scope)...
Jan 13 21:23:14.240698 systemd[1]: Reloading...
Jan 13 21:23:14.356453 zram_generator::config[2589]: No configuration found.
Jan 13 21:23:14.517618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:23:14.643225 systemd[1]: Reloading finished in 401 ms.
Jan 13 21:23:14.703612 kubelet[2276]: I0113 21:23:14.703493 2276 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:23:14.703923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:14.726021 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:23:14.726397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:14.726484 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 115.5M memory peak, 0B memory swap peak.
Jan 13 21:23:14.733646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:14.984387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:14.996198 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:23:15.107330 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:15.107330 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:23:15.107330 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:15.107330 kubelet[2638]: I0113 21:23:15.106234 2638 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:23:15.116887 kubelet[2638]: I0113 21:23:15.116846 2638 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:23:15.117066 kubelet[2638]: I0113 21:23:15.117049 2638 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:23:15.118519 kubelet[2638]: I0113 21:23:15.118477 2638 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:23:15.121993 kubelet[2638]: I0113 21:23:15.121971 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:23:15.126325 kubelet[2638]: I0113 21:23:15.126253 2638 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:23:15.146157 kubelet[2638]: I0113 21:23:15.146124 2638 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:23:15.146802 kubelet[2638]: I0113 21:23:15.146596 2638 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:23:15.146916 kubelet[2638]: I0113 21:23:15.146862 2638 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:23:15.147078 kubelet[2638]: I0113 21:23:15.146929 2638 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:23:15.147078 kubelet[2638]: I0113 21:23:15.146947 2638 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:23:15.147078 kubelet[2638]: I0113 21:23:15.146994 2638 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:15.151097 kubelet[2638]: I0113 21:23:15.147145 2638 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 21:23:15.151097 kubelet[2638]: I0113 21:23:15.147167 2638 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:23:15.151097 kubelet[2638]: I0113 21:23:15.147215 2638 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:23:15.151097 kubelet[2638]: I0113 21:23:15.147237 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:23:15.152820 kubelet[2638]: I0113 21:23:15.152795 2638 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:23:15.153930 kubelet[2638]: I0113 21:23:15.153677 2638 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:23:15.155946 kubelet[2638]: I0113 21:23:15.155076 2638 server.go:1256] "Started kubelet"
Jan 13 21:23:15.160339 kubelet[2638]: I0113 21:23:15.159575 2638 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:23:15.160956 kubelet[2638]: I0113 21:23:15.160807 2638 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 21:23:15.162823 kubelet[2638]: I0113 21:23:15.162798 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:23:15.164753 kubelet[2638]: I0113 21:23:15.164218 2638 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:23:15.165073 kubelet[2638]: I0113 21:23:15.163893 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:23:15.170512 kubelet[2638]: I0113 21:23:15.169138 2638 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:23:15.171633 kubelet[2638]: I0113 21:23:15.171589 2638 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 21:23:15.172971 kubelet[2638]: I0113 21:23:15.172945 2638 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 21:23:15.204212 kubelet[2638]: I0113 21:23:15.203099 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:23:15.205521 kubelet[2638]: E0113 21:23:15.204418 2638 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:23:15.205521 kubelet[2638]: I0113 21:23:15.204653 2638 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:23:15.205521 kubelet[2638]: I0113 21:23:15.204771 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:23:15.209459 kubelet[2638]: I0113 21:23:15.209430 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:23:15.209563 kubelet[2638]: I0113 21:23:15.209481 2638 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:23:15.209563 kubelet[2638]: I0113 21:23:15.209508 2638 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 21:23:15.209671 kubelet[2638]: E0113 21:23:15.209624 2638 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:23:15.240609 kubelet[2638]: I0113 21:23:15.239269 2638 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:23:15.296552 kubelet[2638]: I0113 21:23:15.295995 2638 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.310429 kubelet[2638]: E0113 21:23:15.309868 2638 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:23:15.315567 kubelet[2638]: I0113 21:23:15.314403 2638 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.315567 kubelet[2638]: I0113 21:23:15.314500 2638 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.362099 kubelet[2638]: I0113 21:23:15.362069 2638 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:23:15.362256 kubelet[2638]: I0113 21:23:15.362161 2638 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:23:15.362256 kubelet[2638]: I0113 21:23:15.362192 2638 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:15.362432 kubelet[2638]: I0113 21:23:15.362421 2638 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:23:15.362484 kubelet[2638]: I0113 21:23:15.362455 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:23:15.362484 kubelet[2638]: I0113 21:23:15.362469 2638 policy_none.go:49] "None policy: Start"
Jan 13 21:23:15.364206 kubelet[2638]: I0113 21:23:15.363371 2638 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:23:15.364206 kubelet[2638]: I0113 21:23:15.363411 2638 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:23:15.364206 kubelet[2638]: I0113 21:23:15.363620 2638 state_mem.go:75] "Updated machine memory state"
Jan 13 21:23:15.371718 kubelet[2638]: I0113 21:23:15.371681 2638 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:23:15.372811 kubelet[2638]: I0113 21:23:15.372574 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:23:15.510974 kubelet[2638]: I0113 21:23:15.510850 2638 topology_manager.go:215] "Topology Admit Handler" podUID="f1f2c6fba66a1913071531d921368ceb" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.511874 kubelet[2638]: I0113 21:23:15.511814 2638 topology_manager.go:215] "Topology Admit Handler" podUID="678eab33ba46b9a64b5fcb4b71a5d277" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.512014 kubelet[2638]: I0113 21:23:15.511896 2638 topology_manager.go:215] "Topology Admit Handler" podUID="4d55d8d6df5e2676e54b6f296ffd75ec" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.522397 kubelet[2638]: W0113 21:23:15.521201 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 13 21:23:15.522397 kubelet[2638]: W0113 21:23:15.521547 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 13 21:23:15.523492 kubelet[2638]: W0113 21:23:15.523468 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 13 21:23:15.577873 kubelet[2638]: I0113 21:23:15.577830 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578264 kubelet[2638]: I0113 21:23:15.578195 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578409 kubelet[2638]: I0113 21:23:15.578312 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578409 kubelet[2638]: I0113 21:23:15.578355 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d55d8d6df5e2676e54b6f296ffd75ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"4d55d8d6df5e2676e54b6f296ffd75ec\") " pod="kube-system/kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578531 kubelet[2638]: I0113 21:23:15.578413 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578531 kubelet[2638]: I0113 21:23:15.578474 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578531 kubelet[2638]: I0113 21:23:15.578513 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578689 kubelet[2638]: I0113 21:23:15.578584 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/678eab33ba46b9a64b5fcb4b71a5d277-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"678eab33ba46b9a64b5fcb4b71a5d277\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:15.578689 kubelet[2638]: I0113 21:23:15.578652 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1f2c6fba66a1913071531d921368ceb-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" (UID: \"f1f2c6fba66a1913071531d921368ceb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:16.148693 kubelet[2638]: I0113 21:23:16.148639 2638 apiserver.go:52] "Watching apiserver"
Jan 13 21:23:16.174315 kubelet[2638]: I0113 21:23:16.172369 2638 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 21:23:16.339323 kubelet[2638]: W0113 21:23:16.336464 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 13 21:23:16.339323 kubelet[2638]: E0113 21:23:16.336563 2638 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal"
Jan 13 21:23:16.405177 kubelet[2638]: I0113 21:23:16.404957 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" podStartSLOduration=1.404894302 podStartE2EDuration="1.404894302s" podCreationTimestamp="2025-01-13 21:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:16.384047904 +0000 UTC m=+1.381266861" watchObservedRunningTime="2025-01-13 21:23:16.404894302 +0000 UTC m=+1.402113257"
Jan 13 21:23:16.405177 kubelet[2638]: I0113 21:23:16.405161 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" podStartSLOduration=1.405126866 podStartE2EDuration="1.405126866s" podCreationTimestamp="2025-01-13 21:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:16.403692181 +0000 UTC m=+1.400911137" watchObservedRunningTime="2025-01-13 21:23:16.405126866 +0000 UTC m=+1.402345821"
Jan 13 21:23:17.442407 update_engine[1461]: I20250113 21:23:17.442332 1461 update_attempter.cc:509] Updating boot flags...
Jan 13 21:23:17.579673 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2691)
Jan 13 21:23:17.825358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2694)
Jan 13 21:23:18.047428 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2694)
Jan 13 21:23:19.548506 kubelet[2638]: I0113 21:23:19.548112 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" podStartSLOduration=4.54796009 podStartE2EDuration="4.54796009s" podCreationTimestamp="2025-01-13 21:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:16.428081514 +0000 UTC m=+1.425300470" watchObservedRunningTime="2025-01-13 21:23:19.54796009 +0000 UTC m=+4.545179041"
Jan 13 21:23:21.097068 sudo[1747]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:21.140183 sshd[1744]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:21.144922 systemd[1]: sshd@8-10.128.0.49:22-147.75.109.163:42444.service: Deactivated successfully.
Jan 13 21:23:21.147757 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:23:21.148073 systemd[1]: session-9.scope: Consumed 5.994s CPU time, 193.0M memory peak, 0B memory swap peak.
Jan 13 21:23:21.150156 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:23:21.151933 systemd-logind[1457]: Removed session 9.
Jan 13 21:23:28.471181 kubelet[2638]: I0113 21:23:28.471138 2638 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:23:28.473254 containerd[1476]: time="2025-01-13T21:23:28.472535982Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:23:28.473746 kubelet[2638]: I0113 21:23:28.472848 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:23:28.923436 kubelet[2638]: I0113 21:23:28.921927 2638 topology_manager.go:215] "Topology Admit Handler" podUID="95841e23-3b82-4a4c-9b01-97a9505adb78" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-z5x6j"
Jan 13 21:23:28.939259 systemd[1]: Created slice kubepods-besteffort-pod95841e23_3b82_4a4c_9b01_97a9505adb78.slice - libcontainer container kubepods-besteffort-pod95841e23_3b82_4a4c_9b01_97a9505adb78.slice.
Jan 13 21:23:28.972053 kubelet[2638]: I0113 21:23:28.972014 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95841e23-3b82-4a4c-9b01-97a9505adb78-var-lib-calico\") pod \"tigera-operator-c7ccbd65-z5x6j\" (UID: \"95841e23-3b82-4a4c-9b01-97a9505adb78\") " pod="tigera-operator/tigera-operator-c7ccbd65-z5x6j"
Jan 13 21:23:28.972182 kubelet[2638]: I0113 21:23:28.972076 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msq7m\" (UniqueName: \"kubernetes.io/projected/95841e23-3b82-4a4c-9b01-97a9505adb78-kube-api-access-msq7m\") pod \"tigera-operator-c7ccbd65-z5x6j\" (UID: \"95841e23-3b82-4a4c-9b01-97a9505adb78\") " pod="tigera-operator/tigera-operator-c7ccbd65-z5x6j"
Jan 13 21:23:29.237158 kubelet[2638]: I0113 21:23:29.237111 2638 topology_manager.go:215] "Topology Admit Handler" podUID="b0a6c955-583e-4501-a432-9d7e4f4d692e" podNamespace="kube-system" podName="kube-proxy-v7wt8"
Jan 13 21:23:29.249548 systemd[1]: Created slice kubepods-besteffort-podb0a6c955_583e_4501_a432_9d7e4f4d692e.slice - libcontainer container kubepods-besteffort-podb0a6c955_583e_4501_a432_9d7e4f4d692e.slice.
Jan 13 21:23:29.252773 containerd[1476]: time="2025-01-13T21:23:29.252724458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-z5x6j,Uid:95841e23-3b82-4a4c-9b01-97a9505adb78,Namespace:tigera-operator,Attempt:0,}"
Jan 13 21:23:29.274606 kubelet[2638]: I0113 21:23:29.274344 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhc5\" (UniqueName: \"kubernetes.io/projected/b0a6c955-583e-4501-a432-9d7e4f4d692e-kube-api-access-whhc5\") pod \"kube-proxy-v7wt8\" (UID: \"b0a6c955-583e-4501-a432-9d7e4f4d692e\") " pod="kube-system/kube-proxy-v7wt8"
Jan 13 21:23:29.274606 kubelet[2638]: I0113 21:23:29.274409 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0a6c955-583e-4501-a432-9d7e4f4d692e-xtables-lock\") pod \"kube-proxy-v7wt8\" (UID: \"b0a6c955-583e-4501-a432-9d7e4f4d692e\") " pod="kube-system/kube-proxy-v7wt8"
Jan 13 21:23:29.274606 kubelet[2638]: I0113 21:23:29.274447 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0a6c955-583e-4501-a432-9d7e4f4d692e-kube-proxy\") pod \"kube-proxy-v7wt8\" (UID: \"b0a6c955-583e-4501-a432-9d7e4f4d692e\") " pod="kube-system/kube-proxy-v7wt8"
Jan 13 21:23:29.274606 kubelet[2638]: I0113 21:23:29.274480 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a6c955-583e-4501-a432-9d7e4f4d692e-lib-modules\") pod \"kube-proxy-v7wt8\" (UID: \"b0a6c955-583e-4501-a432-9d7e4f4d692e\") " pod="kube-system/kube-proxy-v7wt8"
Jan 13 21:23:29.290201 containerd[1476]: time="2025-01-13T21:23:29.290056458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:29.290201 containerd[1476]: time="2025-01-13T21:23:29.290162867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:29.291403 containerd[1476]: time="2025-01-13T21:23:29.290631622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:29.291707 containerd[1476]: time="2025-01-13T21:23:29.291637184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:29.320146 systemd[1]: run-containerd-runc-k8s.io-c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d-runc.w0o52p.mount: Deactivated successfully.
Jan 13 21:23:29.329534 systemd[1]: Started cri-containerd-c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d.scope - libcontainer container c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d.
Jan 13 21:23:29.384817 containerd[1476]: time="2025-01-13T21:23:29.384731742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-z5x6j,Uid:95841e23-3b82-4a4c-9b01-97a9505adb78,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d\""
Jan 13 21:23:29.387884 containerd[1476]: time="2025-01-13T21:23:29.387531022Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 21:23:29.555692 containerd[1476]: time="2025-01-13T21:23:29.555563753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7wt8,Uid:b0a6c955-583e-4501-a432-9d7e4f4d692e,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:29.586629 containerd[1476]: time="2025-01-13T21:23:29.586479102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:29.586629 containerd[1476]: time="2025-01-13T21:23:29.586546652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:29.586629 containerd[1476]: time="2025-01-13T21:23:29.586567286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:29.586889 containerd[1476]: time="2025-01-13T21:23:29.586686952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:29.610524 systemd[1]: Started cri-containerd-f1865a628b1891c3184392deaf2b096ab8243108dd79bbb87f1dc446caae2cfb.scope - libcontainer container f1865a628b1891c3184392deaf2b096ab8243108dd79bbb87f1dc446caae2cfb.
Jan 13 21:23:29.643342 containerd[1476]: time="2025-01-13T21:23:29.643257411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7wt8,Uid:b0a6c955-583e-4501-a432-9d7e4f4d692e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1865a628b1891c3184392deaf2b096ab8243108dd79bbb87f1dc446caae2cfb\""
Jan 13 21:23:29.648155 containerd[1476]: time="2025-01-13T21:23:29.648025426Z" level=info msg="CreateContainer within sandbox \"f1865a628b1891c3184392deaf2b096ab8243108dd79bbb87f1dc446caae2cfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:23:29.666436 containerd[1476]: time="2025-01-13T21:23:29.666388932Z" level=info msg="CreateContainer within sandbox \"f1865a628b1891c3184392deaf2b096ab8243108dd79bbb87f1dc446caae2cfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfd60178d66d7a275cc01dfaaa371dcdaa971c20e5bd44459d21523c4e08a876\""
Jan 13 21:23:29.668345 containerd[1476]: time="2025-01-13T21:23:29.666921548Z" level=info msg="StartContainer for \"bfd60178d66d7a275cc01dfaaa371dcdaa971c20e5bd44459d21523c4e08a876\""
Jan 13 21:23:29.702479 systemd[1]: Started cri-containerd-bfd60178d66d7a275cc01dfaaa371dcdaa971c20e5bd44459d21523c4e08a876.scope - libcontainer container bfd60178d66d7a275cc01dfaaa371dcdaa971c20e5bd44459d21523c4e08a876.
Jan 13 21:23:29.743142 containerd[1476]: time="2025-01-13T21:23:29.742252589Z" level=info msg="StartContainer for \"bfd60178d66d7a275cc01dfaaa371dcdaa971c20e5bd44459d21523c4e08a876\" returns successfully"
Jan 13 21:23:30.372615 kubelet[2638]: I0113 21:23:30.372573 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v7wt8" podStartSLOduration=1.3725191460000001 podStartE2EDuration="1.372519146s" podCreationTimestamp="2025-01-13 21:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:30.372189893 +0000 UTC m=+15.369408838" watchObservedRunningTime="2025-01-13 21:23:30.372519146 +0000 UTC m=+15.369738101"
Jan 13 21:23:30.411063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075946128.mount: Deactivated successfully.
Jan 13 21:23:31.238667 containerd[1476]: time="2025-01-13T21:23:31.238589447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:31.240008 containerd[1476]: time="2025-01-13T21:23:31.239942043Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764305"
Jan 13 21:23:31.241178 containerd[1476]: time="2025-01-13T21:23:31.241089400Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:31.243927 containerd[1476]: time="2025-01-13T21:23:31.243865605Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:31.245052 containerd[1476]: time="2025-01-13T21:23:31.244863047Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.857280778s"
Jan 13 21:23:31.245052 containerd[1476]: time="2025-01-13T21:23:31.244906417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 21:23:31.247673 containerd[1476]: time="2025-01-13T21:23:31.247549666Z" level=info msg="CreateContainer within sandbox \"c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 21:23:31.271513 containerd[1476]: time="2025-01-13T21:23:31.271471994Z" level=info msg="CreateContainer within sandbox \"c5c128e3bf0ee84a47759aaee7c5d869d8b432182adb89068a09651c22cb8f2d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f\""
Jan 13 21:23:31.273074 containerd[1476]: time="2025-01-13T21:23:31.271968819Z" level=info msg="StartContainer for \"a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f\""
Jan 13 21:23:31.312556 systemd[1]: run-containerd-runc-k8s.io-a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f-runc.IuKK3l.mount: Deactivated successfully.
Jan 13 21:23:31.319490 systemd[1]: Started cri-containerd-a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f.scope - libcontainer container a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f.
Jan 13 21:23:31.351045 containerd[1476]: time="2025-01-13T21:23:31.350936293Z" level=info msg="StartContainer for \"a07351941aa3bd29b99d4ec02d7a0553696b648fcede03dceb7ed39f1d174b4f\" returns successfully"
Jan 13 21:23:34.627325 kubelet[2638]: I0113 21:23:34.625541 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-z5x6j" podStartSLOduration=4.766933596 podStartE2EDuration="6.625479455s" podCreationTimestamp="2025-01-13 21:23:28 +0000 UTC" firstStartedPulling="2025-01-13 21:23:29.386733909 +0000 UTC m=+14.383952841" lastFinishedPulling="2025-01-13 21:23:31.245279755 +0000 UTC m=+16.242498700" observedRunningTime="2025-01-13 21:23:31.376337014 +0000 UTC m=+16.373555969" watchObservedRunningTime="2025-01-13 21:23:34.625479455 +0000 UTC m=+19.622698405"
Jan 13 21:23:34.627325 kubelet[2638]: I0113 21:23:34.625779 2638 topology_manager.go:215] "Topology Admit Handler" podUID="db636c91-3ff3-4380-aa2d-b29389d6bfde" podNamespace="calico-system" podName="calico-typha-56645865b9-2q92d"
Jan 13 21:23:34.639669 systemd[1]: Created slice kubepods-besteffort-poddb636c91_3ff3_4380_aa2d_b29389d6bfde.slice - libcontainer container kubepods-besteffort-poddb636c91_3ff3_4380_aa2d_b29389d6bfde.slice.
Jan 13 21:23:34.716180 kubelet[2638]: I0113 21:23:34.716118 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db636c91-3ff3-4380-aa2d-b29389d6bfde-tigera-ca-bundle\") pod \"calico-typha-56645865b9-2q92d\" (UID: \"db636c91-3ff3-4380-aa2d-b29389d6bfde\") " pod="calico-system/calico-typha-56645865b9-2q92d"
Jan 13 21:23:34.718248 kubelet[2638]: I0113 21:23:34.717108 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/db636c91-3ff3-4380-aa2d-b29389d6bfde-typha-certs\") pod \"calico-typha-56645865b9-2q92d\" (UID: \"db636c91-3ff3-4380-aa2d-b29389d6bfde\") " pod="calico-system/calico-typha-56645865b9-2q92d"
Jan 13 21:23:34.718248 kubelet[2638]: I0113 21:23:34.717210 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt72v\" (UniqueName: \"kubernetes.io/projected/db636c91-3ff3-4380-aa2d-b29389d6bfde-kube-api-access-lt72v\") pod \"calico-typha-56645865b9-2q92d\" (UID: \"db636c91-3ff3-4380-aa2d-b29389d6bfde\") " pod="calico-system/calico-typha-56645865b9-2q92d"
Jan 13 21:23:34.851668 kubelet[2638]: I0113 21:23:34.851637 2638 topology_manager.go:215] "Topology Admit Handler" podUID="610b9051-06a6-45c8-8eba-9bddc30d9da2" podNamespace="calico-system" podName="calico-node-wcbgw"
Jan 13 21:23:34.866396 systemd[1]: Created slice kubepods-besteffort-pod610b9051_06a6_45c8_8eba_9bddc30d9da2.slice - libcontainer container kubepods-besteffort-pod610b9051_06a6_45c8_8eba_9bddc30d9da2.slice.
Jan 13 21:23:34.918009 kubelet[2638]: I0113 21:23:34.917639 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-policysync\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918009 kubelet[2638]: I0113 21:23:34.917693 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-var-run-calico\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918009 kubelet[2638]: I0113 21:23:34.917724 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-lib-modules\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918855 kubelet[2638]: I0113 21:23:34.918087 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-cni-bin-dir\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918855 kubelet[2638]: I0113 21:23:34.918157 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/610b9051-06a6-45c8-8eba-9bddc30d9da2-tigera-ca-bundle\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918855 kubelet[2638]: I0113 21:23:34.918194 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/610b9051-06a6-45c8-8eba-9bddc30d9da2-node-certs\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918855 kubelet[2638]: I0113 21:23:34.918228 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-cni-net-dir\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.918855 kubelet[2638]: I0113 21:23:34.918261 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-var-lib-calico\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.919122 kubelet[2638]: I0113 21:23:34.918325 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-xtables-lock\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.919122 kubelet[2638]: I0113 21:23:34.918361 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-cni-log-dir\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.919122 kubelet[2638]: I0113 21:23:34.918465 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/610b9051-06a6-45c8-8eba-9bddc30d9da2-flexvol-driver-host\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.919122 kubelet[2638]: I0113 21:23:34.918553 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7gg\" (UniqueName: \"kubernetes.io/projected/610b9051-06a6-45c8-8eba-9bddc30d9da2-kube-api-access-4r7gg\") pod \"calico-node-wcbgw\" (UID: \"610b9051-06a6-45c8-8eba-9bddc30d9da2\") " pod="calico-system/calico-node-wcbgw"
Jan 13 21:23:34.946133 containerd[1476]: time="2025-01-13T21:23:34.945327838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56645865b9-2q92d,Uid:db636c91-3ff3-4380-aa2d-b29389d6bfde,Namespace:calico-system,Attempt:0,}"
Jan 13 21:23:34.986303 kubelet[2638]: I0113 21:23:34.985888 2638 topology_manager.go:215] "Topology Admit Handler" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" podNamespace="calico-system" podName="csi-node-driver-dnbrj"
Jan 13 21:23:34.987137 kubelet[2638]: E0113 21:23:34.987097 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a"
Jan 13 21:23:35.009218 containerd[1476]: time="2025-01-13T21:23:35.008616717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:35.009496 containerd[1476]: time="2025-01-13T21:23:35.009452708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:35.009766 containerd[1476]: time="2025-01-13T21:23:35.009637456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:35.012587 containerd[1476]: time="2025-01-13T21:23:35.012532063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:35.019617 kubelet[2638]: I0113 21:23:35.019493 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13b69416-81ae-4717-b37c-e84f2bf2d81a-kubelet-dir\") pod \"csi-node-driver-dnbrj\" (UID: \"13b69416-81ae-4717-b37c-e84f2bf2d81a\") " pod="calico-system/csi-node-driver-dnbrj"
Jan 13 21:23:35.019617 kubelet[2638]: I0113 21:23:35.019553 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13b69416-81ae-4717-b37c-e84f2bf2d81a-registration-dir\") pod \"csi-node-driver-dnbrj\" (UID: \"13b69416-81ae-4717-b37c-e84f2bf2d81a\") " pod="calico-system/csi-node-driver-dnbrj"
Jan 13 21:23:35.019617 kubelet[2638]: I0113 21:23:35.019590 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nqv\" (UniqueName: \"kubernetes.io/projected/13b69416-81ae-4717-b37c-e84f2bf2d81a-kube-api-access-76nqv\") pod \"csi-node-driver-dnbrj\" (UID: \"13b69416-81ae-4717-b37c-e84f2bf2d81a\") " pod="calico-system/csi-node-driver-dnbrj"
Jan 13 21:23:35.019823 kubelet[2638]: I0113 21:23:35.019681 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13b69416-81ae-4717-b37c-e84f2bf2d81a-varrun\") pod \"csi-node-driver-dnbrj\" (UID: \"13b69416-81ae-4717-b37c-e84f2bf2d81a\") " pod="calico-system/csi-node-driver-dnbrj"
Jan 13 21:23:35.019823 kubelet[2638]: I0113 21:23:35.019771 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13b69416-81ae-4717-b37c-e84f2bf2d81a-socket-dir\") pod \"csi-node-driver-dnbrj\" (UID: \"13b69416-81ae-4717-b37c-e84f2bf2d81a\") " pod="calico-system/csi-node-driver-dnbrj"
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.032919 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.035360 kubelet[2638]: W0113 21:23:35.032947 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.032980 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.034371 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.035360 kubelet[2638]: W0113 21:23:35.034388 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.034805 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.035360 kubelet[2638]: W0113 21:23:35.034821 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.035153 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.035360 kubelet[2638]: W0113 21:23:35.035168 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.035360 kubelet[2638]: E0113 21:23:35.035203 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.035955 kubelet[2638]: E0113 21:23:35.035540 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.035955 kubelet[2638]: W0113 21:23:35.035554 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.035955 kubelet[2638]: E0113 21:23:35.035599 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.035955 kubelet[2638]: E0113 21:23:35.035762 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.035955 kubelet[2638]: E0113 21:23:35.035790 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.036197 kubelet[2638]: E0113 21:23:35.036028 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.036197 kubelet[2638]: W0113 21:23:35.036040 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.036197 kubelet[2638]: E0113 21:23:35.036102 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.037818 kubelet[2638]: E0113 21:23:35.037190 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.038356 kubelet[2638]: W0113 21:23:35.038078 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.038356 kubelet[2638]: E0113 21:23:35.038124 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.043450 kubelet[2638]: E0113 21:23:35.041863 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.043450 kubelet[2638]: W0113 21:23:35.041895 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.043450 kubelet[2638]: E0113 21:23:35.041917 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.043968 kubelet[2638]: E0113 21:23:35.043707 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.043968 kubelet[2638]: W0113 21:23:35.043727 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.043968 kubelet[2638]: E0113 21:23:35.043748 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.046034 kubelet[2638]: E0113 21:23:35.045572 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.046034 kubelet[2638]: W0113 21:23:35.045603 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.046034 kubelet[2638]: E0113 21:23:35.045625 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.051905 kubelet[2638]: E0113 21:23:35.051374 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.051905 kubelet[2638]: W0113 21:23:35.051406 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.051905 kubelet[2638]: E0113 21:23:35.051428 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.058334 kubelet[2638]: E0113 21:23:35.058018 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.058334 kubelet[2638]: W0113 21:23:35.058038 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.058334 kubelet[2638]: E0113 21:23:35.058065 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.060908 kubelet[2638]: E0113 21:23:35.060061 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.060908 kubelet[2638]: W0113 21:23:35.060827 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.060908 kubelet[2638]: E0113 21:23:35.060860 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.079504 systemd[1]: Started cri-containerd-13b457f7c0c9d70bbccf76003f43bdb1817779955b76c06d104dfd0e94549007.scope - libcontainer container 13b457f7c0c9d70bbccf76003f43bdb1817779955b76c06d104dfd0e94549007.
Jan 13 21:23:35.091944 kubelet[2638]: E0113 21:23:35.091924 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.092104 kubelet[2638]: W0113 21:23:35.092077 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.092341 kubelet[2638]: E0113 21:23:35.092172 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.121803 kubelet[2638]: E0113 21:23:35.121524 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.121803 kubelet[2638]: W0113 21:23:35.121546 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.121803 kubelet[2638]: E0113 21:23:35.121570 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.122940 kubelet[2638]: E0113 21:23:35.122682 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.122940 kubelet[2638]: W0113 21:23:35.122701 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.123544 kubelet[2638]: E0113 21:23:35.123210 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.124856 kubelet[2638]: E0113 21:23:35.124643 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.124856 kubelet[2638]: W0113 21:23:35.124668 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.124856 kubelet[2638]: E0113 21:23:35.124717 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.126737 kubelet[2638]: E0113 21:23:35.126400 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.126737 kubelet[2638]: W0113 21:23:35.126422 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.126737 kubelet[2638]: E0113 21:23:35.126679 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.127260 kubelet[2638]: E0113 21:23:35.127170 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.127260 kubelet[2638]: W0113 21:23:35.127186 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.127652 kubelet[2638]: E0113 21:23:35.127537 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.129328 kubelet[2638]: E0113 21:23:35.128431 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.129328 kubelet[2638]: W0113 21:23:35.128452 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.129709 kubelet[2638]: E0113 21:23:35.129530 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.129901 kubelet[2638]: E0113 21:23:35.129880 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.130154 kubelet[2638]: W0113 21:23:35.130003 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.130348 kubelet[2638]: E0113 21:23:35.130327 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.130825 kubelet[2638]: E0113 21:23:35.130781 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.130825 kubelet[2638]: W0113 21:23:35.130800 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.131251 kubelet[2638]: E0113 21:23:35.131081 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.131536 kubelet[2638]: E0113 21:23:35.131516 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.131785 kubelet[2638]: W0113 21:23:35.131639 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.131934 kubelet[2638]: E0113 21:23:35.131912 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.132319 kubelet[2638]: E0113 21:23:35.132276 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.132319 kubelet[2638]: W0113 21:23:35.132314 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.132999 kubelet[2638]: E0113 21:23:35.132969 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.133637 kubelet[2638]: E0113 21:23:35.133577 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.133637 kubelet[2638]: W0113 21:23:35.133604 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.134437 kubelet[2638]: E0113 21:23:35.134407 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.134786 kubelet[2638]: E0113 21:23:35.134760 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.134786 kubelet[2638]: W0113 21:23:35.134784 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.135389 kubelet[2638]: E0113 21:23:35.135358 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.136485 kubelet[2638]: E0113 21:23:35.136410 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.136485 kubelet[2638]: W0113 21:23:35.136432 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.136652 kubelet[2638]: E0113 21:23:35.136569 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.136900 kubelet[2638]: E0113 21:23:35.136824 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.136900 kubelet[2638]: W0113 21:23:35.136844 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.137364 kubelet[2638]: E0113 21:23:35.137334 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.138674 kubelet[2638]: E0113 21:23:35.138634 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.138674 kubelet[2638]: W0113 21:23:35.138657 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.138900 kubelet[2638]: E0113 21:23:35.138833 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:35.139157 kubelet[2638]: E0113 21:23:35.139124 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:35.139157 kubelet[2638]: W0113 21:23:35.139142 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:35.139373 kubelet[2638]: E0113 21:23:35.139245 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 21:23:35.140724 kubelet[2638]: E0113 21:23:35.140689 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.140724 kubelet[2638]: W0113 21:23:35.140714 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.140984 kubelet[2638]: E0113 21:23:35.140865 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.141355 kubelet[2638]: E0113 21:23:35.141334 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.141355 kubelet[2638]: W0113 21:23:35.141354 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.141497 kubelet[2638]: E0113 21:23:35.141476 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.142418 kubelet[2638]: E0113 21:23:35.142391 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.142418 kubelet[2638]: W0113 21:23:35.142416 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.142779 kubelet[2638]: E0113 21:23:35.142570 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.143404 kubelet[2638]: E0113 21:23:35.143381 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.143404 kubelet[2638]: W0113 21:23:35.143402 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.143877 kubelet[2638]: E0113 21:23:35.143849 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.144277 kubelet[2638]: E0113 21:23:35.144245 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.144277 kubelet[2638]: W0113 21:23:35.144269 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.144481 kubelet[2638]: E0113 21:23:35.144398 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:23:35.145426 kubelet[2638]: E0113 21:23:35.145399 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.145426 kubelet[2638]: W0113 21:23:35.145425 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.146366 kubelet[2638]: E0113 21:23:35.146338 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.147053 kubelet[2638]: E0113 21:23:35.146708 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.147053 kubelet[2638]: W0113 21:23:35.146727 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.147053 kubelet[2638]: E0113 21:23:35.146828 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.147410 kubelet[2638]: E0113 21:23:35.147385 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.147410 kubelet[2638]: W0113 21:23:35.147409 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.147559 kubelet[2638]: E0113 21:23:35.147438 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.152397 kubelet[2638]: E0113 21:23:35.152369 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.152397 kubelet[2638]: W0113 21:23:35.152395 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.152546 kubelet[2638]: E0113 21:23:35.152421 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:23:35.175885 containerd[1476]: time="2025-01-13T21:23:35.175657041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wcbgw,Uid:610b9051-06a6-45c8-8eba-9bddc30d9da2,Namespace:calico-system,Attempt:0,}" Jan 13 21:23:35.191270 kubelet[2638]: E0113 21:23:35.191243 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:23:35.191270 kubelet[2638]: W0113 21:23:35.191267 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:23:35.192058 kubelet[2638]: E0113 21:23:35.191853 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:23:35.250346 containerd[1476]: time="2025-01-13T21:23:35.249397132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:35.250346 containerd[1476]: time="2025-01-13T21:23:35.249681714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:35.250346 containerd[1476]: time="2025-01-13T21:23:35.249903109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:35.251941 containerd[1476]: time="2025-01-13T21:23:35.250513255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:35.261366 containerd[1476]: time="2025-01-13T21:23:35.261121251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56645865b9-2q92d,Uid:db636c91-3ff3-4380-aa2d-b29389d6bfde,Namespace:calico-system,Attempt:0,} returns sandbox id \"13b457f7c0c9d70bbccf76003f43bdb1817779955b76c06d104dfd0e94549007\"" Jan 13 21:23:35.265307 containerd[1476]: time="2025-01-13T21:23:35.265251529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:23:35.295552 systemd[1]: Started cri-containerd-78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543.scope - libcontainer container 78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543. Jan 13 21:23:35.352775 containerd[1476]: time="2025-01-13T21:23:35.352587518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wcbgw,Uid:610b9051-06a6-45c8-8eba-9bddc30d9da2,Namespace:calico-system,Attempt:0,} returns sandbox id \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\"" Jan 13 21:23:36.210175 kubelet[2638]: E0113 21:23:36.210138 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:36.236942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387812443.mount: Deactivated successfully. 
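The repeated driver-call.go/plugins.go triplets above come from kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal stdout as JSON. The uds binary is not on disk yet (Calico's flexvol-driver container installs it later in this log), so the exec fails, stdout is empty, and the decode fails with "unexpected end of JSON input". A driver only has to print a JSON status object to satisfy the probe; a minimal sketch of that call convention follows (an illustrative stand-in, not Calico's actual nodeagent~uds driver):

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver sketch: answers the `init` call that kubelet's
# prober issues (driver-call.go) with the JSON status it expects to unmarshal.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Empty stdout is what produces "unexpected end of JSON input" above;
        # a well-formed driver always prints a status object instead.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Operations this sketch does not implement (mount, unmount, ...).
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```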
Jan 13 21:23:37.056764 containerd[1476]: time="2025-01-13T21:23:37.056701452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:37.058029 containerd[1476]: time="2025-01-13T21:23:37.057960175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 13 21:23:37.059412 containerd[1476]: time="2025-01-13T21:23:37.059344422Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:37.062174 containerd[1476]: time="2025-01-13T21:23:37.062110457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:37.063743 containerd[1476]: time="2025-01-13T21:23:37.063005676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.797618614s" Jan 13 21:23:37.063743 containerd[1476]: time="2025-01-13T21:23:37.063049422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:23:37.064742 containerd[1476]: time="2025-01-13T21:23:37.064516438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:23:37.084276 containerd[1476]: time="2025-01-13T21:23:37.084235098Z" level=info msg="CreateContainer within sandbox \"13b457f7c0c9d70bbccf76003f43bdb1817779955b76c06d104dfd0e94549007\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:23:37.103448 containerd[1476]: time="2025-01-13T21:23:37.103420209Z" level=info msg="CreateContainer within sandbox \"13b457f7c0c9d70bbccf76003f43bdb1817779955b76c06d104dfd0e94549007\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"52fb60b70689b65ec55d6f955c7246d91df1b1cdd152dcccb1b8f9f305034c1e\"" Jan 13 21:23:37.105445 containerd[1476]: time="2025-01-13T21:23:37.103960184Z" level=info msg="StartContainer for \"52fb60b70689b65ec55d6f955c7246d91df1b1cdd152dcccb1b8f9f305034c1e\"" Jan 13 21:23:37.150510 systemd[1]: Started cri-containerd-52fb60b70689b65ec55d6f955c7246d91df1b1cdd152dcccb1b8f9f305034c1e.scope - libcontainer container 52fb60b70689b65ec55d6f955c7246d91df1b1cdd152dcccb1b8f9f305034c1e. 
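The typha pull above is bracketed by the PullImage request at 21:23:35.265 and the Pulled record at 21:23:37.063, and the reported "in 1.797618614s" is the width of that window as measured by containerd's internal timer. Recomputing it from the two log timestamps (trimmed to microseconds, the precision Python's strptime accepts) lands within a fraction of a millisecond of the logged figure:

```python
# Recompute the "in 1.797618614s" figure from the containerd timestamps above.
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
started = datetime.strptime("2025-01-13T21:23:35.265251Z", fmt)  # PullImage
pulled = datetime.strptime("2025-01-13T21:23:37.063005Z", fmt)   # Pulled image
# ~1.797754s: the log lines are stamped slightly after the internal timer
# samples, so this agrees with 1.797618614s to about a tenth of a millisecond.
print((pulled - started).total_seconds())
```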
Jan 13 21:23:37.213974 containerd[1476]: time="2025-01-13T21:23:37.213921105Z" level=info msg="StartContainer for \"52fb60b70689b65ec55d6f955c7246d91df1b1cdd152dcccb1b8f9f305034c1e\" returns successfully"
Jan 13 21:23:37.397691 kubelet[2638]: I0113 21:23:37.396561 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-56645865b9-2q92d" podStartSLOduration=1.597279085 podStartE2EDuration="3.396508391s" podCreationTimestamp="2025-01-13 21:23:34 +0000 UTC" firstStartedPulling="2025-01-13 21:23:35.264167898 +0000 UTC m=+20.261386843" lastFinishedPulling="2025-01-13 21:23:37.063397201 +0000 UTC m=+22.060616149" observedRunningTime="2025-01-13 21:23:37.396255813 +0000 UTC m=+22.393474762" watchObservedRunningTime="2025-01-13 21:23:37.396508391 +0000 UTC m=+22.393727345"
Jan 13 21:23:37.405790 kubelet[2638]: E0113 21:23:37.405762 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:37.405790 kubelet[2638]: W0113 21:23:37.405786 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:37.406011 kubelet[2638]: E0113 21:23:37.405813 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
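The pod_startup_latency_tracker record above carries enough timestamps to reconstruct both figures it reports: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency with pull time excluded. A check against the logged values (timestamps copied from the entry above, monotonic m=+ suffixes dropped; nanosecond digits are truncated to the microseconds strptime supports, so the results agree to within a microsecond):

```python
# Reconstruct the two durations for calico-typha-56645865b9-2q92d.
from datetime import datetime

def ts(s: str) -> datetime:
    # Tracker timestamps look like "2025-01-13 21:23:37.063397201 +0000 UTC";
    # trim " UTC" and keep at most microsecond precision for strptime.
    s = s.replace(" UTC", "")
    head, tz = s.rsplit(" ", 1)
    if "." in head:
        base, frac = head.split(".")
        head = base + "." + frac[:6]
    else:
        head += ".000000"
    return datetime.strptime(head + " " + tz, "%Y-%m-%d %H:%M:%S.%f %z")

created = ts("2025-01-13 21:23:34 +0000 UTC")            # podCreationTimestamp
running = ts("2025-01-13 21:23:37.396508391 +0000 UTC")  # watchObservedRunningTime
pull_a = ts("2025-01-13 21:23:35.264167898 +0000 UTC")   # firstStartedPulling
pull_b = ts("2025-01-13 21:23:37.063397201 +0000 UTC")   # lastFinishedPulling

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(e2e)  # ~3.396508, matching podStartE2EDuration="3.396508391s"
print(slo)  # ~1.597278, matching podStartSLOduration=1.597279085
```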
Jan 13 21:23:37.456366 kubelet[2638]: E0113 21:23:37.456330 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:37.456366 kubelet[2638]: W0113 21:23:37.456353 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:37.456608 kubelet[2638]: E0113 21:23:37.456376 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:23:37.464273 kubelet[2638]: E0113 21:23:37.464249 2638 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:23:37.464273 kubelet[2638]: W0113 21:23:37.464269 2638 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:23:37.464447 kubelet[2638]: E0113 21:23:37.464326 2638 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:23:38.040372 containerd[1476]: time="2025-01-13T21:23:38.039927550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:38.041127 containerd[1476]: time="2025-01-13T21:23:38.041055949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 21:23:38.042422 containerd[1476]: time="2025-01-13T21:23:38.042358197Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:38.045304 containerd[1476]: time="2025-01-13T21:23:38.045250067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:38.046374 containerd[1476]: time="2025-01-13T21:23:38.046192821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 981.616323ms" Jan 13 21:23:38.046374 containerd[1476]: time="2025-01-13T21:23:38.046241428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:23:38.048820 containerd[1476]: time="2025-01-13T21:23:38.048620842Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:23:38.063746 containerd[1476]: time="2025-01-13T21:23:38.063701399Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13\"" Jan 13 21:23:38.064858 containerd[1476]: time="2025-01-13T21:23:38.064786821Z" level=info msg="StartContainer for \"b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13\"" Jan 13 21:23:38.121498 systemd[1]: Started cri-containerd-b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13.scope - libcontainer container b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13. Jan 13 21:23:38.156555 containerd[1476]: time="2025-01-13T21:23:38.156423039Z" level=info msg="StartContainer for \"b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13\" returns successfully" Jan 13 21:23:38.171587 systemd[1]: cri-containerd-b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13.scope: Deactivated successfully. Jan 13 21:23:38.206131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13-rootfs.mount: Deactivated successfully. 
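This is the standard CRI sequence containerd logs for every container: CreateContainer against an existing sandbox returns a container id, StartContainer launches it, systemd tracks the shim's processes in a matching cri-containerd-*.scope unit, and for a run-to-completion container like this flexvol-driver installer the scope is deactivated as soon as the process exits. The same sequence can be driven by hand with crictl; a sketch under the assumption that crictl is pointed at this node's containerd socket (pod.json and container.json are hypothetical spec files, not taken from this log):

```python
# Drive the RunPodSandbox/CreateContainer/StartContainer sequence seen above.
import subprocess

def crictl(*args: str) -> str:
    # Thin wrapper: run a crictl subcommand and return its trimmed stdout.
    return subprocess.run(["crictl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

pod_id = crictl("runp", "pod.json")  # RunPodSandbox -> sandbox id
crictl("pull", "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1")
ctr_id = crictl("create", pod_id, "container.json", "pod.json")
crictl("start", ctr_id)             # StartContainer
# The installer runs to completion, so it shows up as Exited shortly after.
print(crictl("ps", "-a", "--id", ctr_id))
```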
Jan 13 21:23:38.211275 kubelet[2638]: E0113 21:23:38.210147 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:38.389015 kubelet[2638]: I0113 21:23:38.388122 2638 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:23:38.794262 containerd[1476]: time="2025-01-13T21:23:38.794087937Z" level=info msg="shim disconnected" id=b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13 namespace=k8s.io Jan 13 21:23:38.794262 containerd[1476]: time="2025-01-13T21:23:38.794177558Z" level=warning msg="cleaning up after shim disconnected" id=b7b59d5588df5e1f70be610167c77003850ff341ab16436aafb9bfaa7b080e13 namespace=k8s.io Jan 13 21:23:38.794262 containerd[1476]: time="2025-01-13T21:23:38.794197389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:23:39.393127 containerd[1476]: time="2025-01-13T21:23:39.393073435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:23:40.210207 kubelet[2638]: E0113 21:23:40.210146 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:42.210056 kubelet[2638]: E0113 21:23:42.210012 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:43.202085 containerd[1476]: time="2025-01-13T21:23:43.202032690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:43.204542 containerd[1476]: time="2025-01-13T21:23:43.204458119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:23:43.205152 containerd[1476]: time="2025-01-13T21:23:43.205103580Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:43.208548 containerd[1476]: time="2025-01-13T21:23:43.208461576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:43.211271 containerd[1476]: time="2025-01-13T21:23:43.210983230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.817851011s" Jan 13 21:23:43.211271 containerd[1476]: time="2025-01-13T21:23:43.211039438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:23:43.216568 containerd[1476]: time="2025-01-13T21:23:43.216232491Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:23:43.239386 containerd[1476]: time="2025-01-13T21:23:43.239030119Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba\"" Jan 13 21:23:43.240100 containerd[1476]: time="2025-01-13T21:23:43.240069609Z" level=info msg="StartContainer for \"e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba\"" Jan 13 21:23:43.292939 systemd[1]: Started cri-containerd-e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba.scope - libcontainer container e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba. Jan 13 21:23:43.329808 containerd[1476]: time="2025-01-13T21:23:43.329752554Z" level=info msg="StartContainer for \"e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba\" returns successfully" Jan 13 21:23:44.210340 kubelet[2638]: E0113 21:23:44.210252 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:44.280390 systemd[1]: cri-containerd-e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba.scope: Deactivated successfully. Jan 13 21:23:44.312758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba-rootfs.mount: Deactivated successfully. Jan 13 21:23:44.320013 kubelet[2638]: I0113 21:23:44.319784 2638 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:23:44.349320 kubelet[2638]: I0113 21:23:44.347907 2638 topology_manager.go:215] "Topology Admit Handler" podUID="149ffd71-7f07-4581-ab1a-ae1e649167c5" podNamespace="kube-system" podName="coredns-76f75df574-xmx4z" Jan 13 21:23:44.353932 kubelet[2638]: I0113 21:23:44.353892 2638 topology_manager.go:215] "Topology Admit Handler" podUID="8743f2ec-4b00-4436-8e8d-31d6c433e17f" podNamespace="calico-apiserver" podName="calico-apiserver-b8b66ff96-zrfxq" Jan 13 21:23:44.355690 kubelet[2638]: I0113 21:23:44.355569 2638 topology_manager.go:215] "Topology Admit Handler" podUID="ea8ed1ad-5383-412b-b6fb-8d38f75a663b" podNamespace="calico-system" podName="calico-kube-controllers-6f864b4c96-r64tw" Jan 13 21:23:44.355800 kubelet[2638]: I0113 21:23:44.355774 2638 topology_manager.go:215] "Topology Admit Handler" podUID="47c77e8b-ecd7-41c6-9767-d6b9be1c4f20" podNamespace="kube-system" podName="coredns-76f75df574-jgmn7" Jan 13 21:23:44.356985 kubelet[2638]: I0113 21:23:44.356767 2638 topology_manager.go:215] "Topology Admit Handler" podUID="f4e98d21-89de-4cfb-bd25-889f4d6587ef" podNamespace="calico-apiserver" podName="calico-apiserver-b8b66ff96-wwr94" Jan 13 21:23:44.377655 systemd[1]: Created slice kubepods-burstable-pod149ffd71_7f07_4581_ab1a_ae1e649167c5.slice - libcontainer container kubepods-burstable-pod149ffd71_7f07_4581_ab1a_ae1e649167c5.slice. 
Jan 13 21:23:44.399821 systemd[1]: Created slice kubepods-burstable-pod47c77e8b_ecd7_41c6_9767_d6b9be1c4f20.slice - libcontainer container kubepods-burstable-pod47c77e8b_ecd7_41c6_9767_d6b9be1c4f20.slice. Jan 13 21:23:44.408790 kubelet[2638]: I0113 21:23:44.406571 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/149ffd71-7f07-4581-ab1a-ae1e649167c5-config-volume\") pod \"coredns-76f75df574-xmx4z\" (UID: \"149ffd71-7f07-4581-ab1a-ae1e649167c5\") " pod="kube-system/coredns-76f75df574-xmx4z" Jan 13 21:23:44.408790 kubelet[2638]: I0113 21:23:44.406626 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47c77e8b-ecd7-41c6-9767-d6b9be1c4f20-config-volume\") pod \"coredns-76f75df574-jgmn7\" (UID: \"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20\") " pod="kube-system/coredns-76f75df574-jgmn7" Jan 13 21:23:44.408790 kubelet[2638]: I0113 21:23:44.406675 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85k8\" (UniqueName: \"kubernetes.io/projected/8743f2ec-4b00-4436-8e8d-31d6c433e17f-kube-api-access-b85k8\") pod \"calico-apiserver-b8b66ff96-zrfxq\" (UID: \"8743f2ec-4b00-4436-8e8d-31d6c433e17f\") " pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" Jan 13 21:23:44.408790 kubelet[2638]: I0113 21:23:44.406716 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44bj\" (UniqueName: \"kubernetes.io/projected/ea8ed1ad-5383-412b-b6fb-8d38f75a663b-kube-api-access-x44bj\") pod \"calico-kube-controllers-6f864b4c96-r64tw\" (UID: \"ea8ed1ad-5383-412b-b6fb-8d38f75a663b\") " pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" Jan 13 21:23:44.408790 kubelet[2638]: I0113 21:23:44.406751 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hx2\" (UniqueName: \"kubernetes.io/projected/f4e98d21-89de-4cfb-bd25-889f4d6587ef-kube-api-access-r2hx2\") pod \"calico-apiserver-b8b66ff96-wwr94\" (UID: \"f4e98d21-89de-4cfb-bd25-889f4d6587ef\") " pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" Jan 13 21:23:44.409125 kubelet[2638]: I0113 21:23:44.406784 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444jg\" (UniqueName: \"kubernetes.io/projected/149ffd71-7f07-4581-ab1a-ae1e649167c5-kube-api-access-444jg\") pod \"coredns-76f75df574-xmx4z\" (UID: \"149ffd71-7f07-4581-ab1a-ae1e649167c5\") " pod="kube-system/coredns-76f75df574-xmx4z" Jan 13 21:23:44.409125 kubelet[2638]: I0113 21:23:44.406822 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea8ed1ad-5383-412b-b6fb-8d38f75a663b-tigera-ca-bundle\") pod \"calico-kube-controllers-6f864b4c96-r64tw\" (UID: \"ea8ed1ad-5383-412b-b6fb-8d38f75a663b\") " pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" Jan 13 21:23:44.409125 kubelet[2638]: I0113 21:23:44.406861 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps66c\" (UniqueName: \"kubernetes.io/projected/47c77e8b-ecd7-41c6-9767-d6b9be1c4f20-kube-api-access-ps66c\") pod \"coredns-76f75df574-jgmn7\" (UID: \"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20\") " 
pod="kube-system/coredns-76f75df574-jgmn7" Jan 13 21:23:44.409125 kubelet[2638]: I0113 21:23:44.406903 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8743f2ec-4b00-4436-8e8d-31d6c433e17f-calico-apiserver-certs\") pod \"calico-apiserver-b8b66ff96-zrfxq\" (UID: \"8743f2ec-4b00-4436-8e8d-31d6c433e17f\") " pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" Jan 13 21:23:44.409125 kubelet[2638]: I0113 21:23:44.406945 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4e98d21-89de-4cfb-bd25-889f4d6587ef-calico-apiserver-certs\") pod \"calico-apiserver-b8b66ff96-wwr94\" (UID: \"f4e98d21-89de-4cfb-bd25-889f4d6587ef\") " pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" Jan 13 21:23:44.414660 systemd[1]: Created slice kubepods-besteffort-podf4e98d21_89de_4cfb_bd25_889f4d6587ef.slice - libcontainer container kubepods-besteffort-podf4e98d21_89de_4cfb_bd25_889f4d6587ef.slice. Jan 13 21:23:44.428534 systemd[1]: Created slice kubepods-besteffort-pod8743f2ec_4b00_4436_8e8d_31d6c433e17f.slice - libcontainer container kubepods-besteffort-pod8743f2ec_4b00_4436_8e8d_31d6c433e17f.slice. Jan 13 21:23:44.436778 systemd[1]: Created slice kubepods-besteffort-podea8ed1ad_5383_412b_b6fb_8d38f75a663b.slice - libcontainer container kubepods-besteffort-podea8ed1ad_5383_412b_b6fb_8d38f75a663b.slice. Jan 13 21:23:44.689963 containerd[1476]: time="2025-01-13T21:23:44.689909167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmx4z,Uid:149ffd71-7f07-4581-ab1a-ae1e649167c5,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:44.712143 containerd[1476]: time="2025-01-13T21:23:44.712086319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jgmn7,Uid:47c77e8b-ecd7-41c6-9767-d6b9be1c4f20,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:44.722104 containerd[1476]: time="2025-01-13T21:23:44.722045940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-wwr94,Uid:f4e98d21-89de-4cfb-bd25-889f4d6587ef,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:23:44.734067 containerd[1476]: time="2025-01-13T21:23:44.734013837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-zrfxq,Uid:8743f2ec-4b00-4436-8e8d-31d6c433e17f,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:23:44.743211 containerd[1476]: time="2025-01-13T21:23:44.743168747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f864b4c96-r64tw,Uid:ea8ed1ad-5383-412b-b6fb-8d38f75a663b,Namespace:calico-system,Attempt:0,}" Jan 13 21:23:45.076522 containerd[1476]: time="2025-01-13T21:23:45.076417813Z" level=info msg="shim disconnected" id=e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba namespace=k8s.io Jan 13 21:23:45.076522 containerd[1476]: time="2025-01-13T21:23:45.076520432Z" level=warning msg="cleaning up after shim disconnected" id=e007bf4a83eb3df7f0001f263eda7a4322aaf91ab7394b810ee5998dd23437ba namespace=k8s.io Jan 13 21:23:45.076522 containerd[1476]: time="2025-01-13T21:23:45.076534358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:23:45.315690 containerd[1476]: time="2025-01-13T21:23:45.314351823Z" level=error msg="Failed to destroy network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.315690 containerd[1476]: time="2025-01-13T21:23:45.314799703Z" level=error msg="encountered an error cleaning up failed sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.315690 containerd[1476]: time="2025-01-13T21:23:45.314870706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f864b4c96-r64tw,Uid:ea8ed1ad-5383-412b-b6fb-8d38f75a663b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.316016 kubelet[2638]: E0113 21:23:45.315168 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.316016 kubelet[2638]: E0113 21:23:45.315250 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" Jan 13 21:23:45.316016 kubelet[2638]: E0113 21:23:45.315311 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" Jan 13 21:23:45.316627 kubelet[2638]: E0113 21:23:45.315402 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f864b4c96-r64tw_calico-system(ea8ed1ad-5383-412b-b6fb-8d38f75a663b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f864b4c96-r64tw_calico-system(ea8ed1ad-5383-412b-b6fb-8d38f75a663b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" podUID="ea8ed1ad-5383-412b-b6fb-8d38f75a663b" Jan 13 21:23:45.350134 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b-shm.mount: Deactivated successfully. Jan 13 21:23:45.375219 containerd[1476]: time="2025-01-13T21:23:45.374340780Z" level=error msg="Failed to destroy network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.375718 containerd[1476]: time="2025-01-13T21:23:45.375675498Z" level=error msg="encountered an error cleaning up failed sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.377790 containerd[1476]: time="2025-01-13T21:23:45.375883756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-zrfxq,Uid:8743f2ec-4b00-4436-8e8d-31d6c433e17f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.378453 kubelet[2638]: E0113 21:23:45.378236 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.378453 kubelet[2638]: E0113 21:23:45.378384 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" Jan 13 21:23:45.378844 kubelet[2638]: E0113 21:23:45.378696 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" Jan 13 21:23:45.380624 kubelet[2638]: E0113 21:23:45.379544 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b8b66ff96-zrfxq_calico-apiserver(8743f2ec-4b00-4436-8e8d-31d6c433e17f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b8b66ff96-zrfxq_calico-apiserver(8743f2ec-4b00-4436-8e8d-31d6c433e17f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" podUID="8743f2ec-4b00-4436-8e8d-31d6c433e17f" Jan 13 21:23:45.381635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8-shm.mount: Deactivated successfully. Jan 13 21:23:45.410529 containerd[1476]: time="2025-01-13T21:23:45.410483101Z" level=error msg="Failed to destroy network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.414577 containerd[1476]: time="2025-01-13T21:23:45.414414708Z" level=error msg="Failed to destroy network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.418209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964-shm.mount: Deactivated successfully. Jan 13 21:23:45.418984 containerd[1476]: time="2025-01-13T21:23:45.418588899Z" level=error msg="encountered an error cleaning up failed sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.418984 containerd[1476]: time="2025-01-13T21:23:45.418656669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-wwr94,Uid:f4e98d21-89de-4cfb-bd25-889f4d6587ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.421696 containerd[1476]: time="2025-01-13T21:23:45.419475339Z" level=error msg="encountered an error cleaning up failed sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.421696 containerd[1476]: time="2025-01-13T21:23:45.419536382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmx4z,Uid:149ffd71-7f07-4581-ab1a-ae1e649167c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.421838 kubelet[2638]: E0113 
21:23:45.419779 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.421838 kubelet[2638]: E0113 21:23:45.419849 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xmx4z" Jan 13 21:23:45.421838 kubelet[2638]: E0113 21:23:45.419885 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xmx4z" Jan 13 21:23:45.422021 kubelet[2638]: E0113 21:23:45.419972 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xmx4z_kube-system(149ffd71-7f07-4581-ab1a-ae1e649167c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xmx4z_kube-system(149ffd71-7f07-4581-ab1a-ae1e649167c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xmx4z" podUID="149ffd71-7f07-4581-ab1a-ae1e649167c5" Jan 13 21:23:45.423776 kubelet[2638]: E0113 21:23:45.423105 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.423776 kubelet[2638]: E0113 21:23:45.423176 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" Jan 13 21:23:45.423776 kubelet[2638]: E0113 21:23:45.423214 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" Jan 13 21:23:45.423981 kubelet[2638]: E0113 21:23:45.423315 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b8b66ff96-wwr94_calico-apiserver(f4e98d21-89de-4cfb-bd25-889f4d6587ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b8b66ff96-wwr94_calico-apiserver(f4e98d21-89de-4cfb-bd25-889f4d6587ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" podUID="f4e98d21-89de-4cfb-bd25-889f4d6587ef" Jan 13 21:23:45.426633 containerd[1476]: time="2025-01-13T21:23:45.426351705Z" level=error msg="Failed to destroy network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.427422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64-shm.mount: Deactivated successfully. Jan 13 21:23:45.429667 containerd[1476]: time="2025-01-13T21:23:45.429471771Z" level=error msg="encountered an error cleaning up failed sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.431268 containerd[1476]: time="2025-01-13T21:23:45.430212067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jgmn7,Uid:47c77e8b-ecd7-41c6-9767-d6b9be1c4f20,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.431268 containerd[1476]: time="2025-01-13T21:23:45.430962965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:23:45.431846 kubelet[2638]: E0113 21:23:45.431565 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.431846 kubelet[2638]: E0113 21:23:45.431609 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-jgmn7" Jan 13 21:23:45.431846 kubelet[2638]: E0113 21:23:45.431640 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jgmn7" Jan 13 21:23:45.432020 kubelet[2638]: E0113 21:23:45.431698 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jgmn7_kube-system(47c77e8b-ecd7-41c6-9767-d6b9be1c4f20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jgmn7_kube-system(47c77e8b-ecd7-41c6-9767-d6b9be1c4f20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jgmn7" podUID="47c77e8b-ecd7-41c6-9767-d6b9be1c4f20" Jan 13 21:23:45.434530 kubelet[2638]: I0113 21:23:45.434150 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:45.435681 containerd[1476]: time="2025-01-13T21:23:45.435104121Z" level=info msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" Jan 13 21:23:45.435681 containerd[1476]: time="2025-01-13T21:23:45.435351397Z" level=info msg="Ensure that sandbox fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b in task-service has been cleanup successfully" Jan 13 21:23:45.439992 kubelet[2638]: I0113 21:23:45.439965 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:45.441336 containerd[1476]: time="2025-01-13T21:23:45.440727486Z" level=info msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\"" Jan 13 21:23:45.441336 containerd[1476]: time="2025-01-13T21:23:45.440930747Z" level=info msg="Ensure that sandbox e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8 in task-service has been cleanup successfully" Jan 13 21:23:45.520663 containerd[1476]: time="2025-01-13T21:23:45.519491876Z" level=error msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" failed" error="failed to destroy network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.520824 kubelet[2638]: E0113 21:23:45.519735 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:45.520824 kubelet[2638]: E0113 21:23:45.519820 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"} Jan 13 21:23:45.520824 kubelet[2638]: E0113 21:23:45.519872 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8743f2ec-4b00-4436-8e8d-31d6c433e17f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:45.520824 kubelet[2638]: E0113 21:23:45.519917 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8743f2ec-4b00-4436-8e8d-31d6c433e17f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" podUID="8743f2ec-4b00-4436-8e8d-31d6c433e17f" Jan 13 21:23:45.522470 containerd[1476]: time="2025-01-13T21:23:45.522426323Z" level=error msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" failed" error="failed to destroy network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:45.522923 kubelet[2638]: E0113 21:23:45.522742 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:45.522923 kubelet[2638]: E0113 21:23:45.522779 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b"} Jan 13 21:23:45.522923 kubelet[2638]: E0113 21:23:45.522832 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea8ed1ad-5383-412b-b6fb-8d38f75a663b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:45.522923 kubelet[2638]: E0113 21:23:45.522892 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea8ed1ad-5383-412b-b6fb-8d38f75a663b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" podUID="ea8ed1ad-5383-412b-b6fb-8d38f75a663b" Jan 13 21:23:46.220104 systemd[1]: Created slice kubepods-besteffort-pod13b69416_81ae_4717_b37c_e84f2bf2d81a.slice - libcontainer container kubepods-besteffort-pod13b69416_81ae_4717_b37c_e84f2bf2d81a.slice. Jan 13 21:23:46.236699 containerd[1476]: time="2025-01-13T21:23:46.236103505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dnbrj,Uid:13b69416-81ae-4717-b37c-e84f2bf2d81a,Namespace:calico-system,Attempt:0,}" Jan 13 21:23:46.325509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c-shm.mount: Deactivated successfully. Jan 13 21:23:46.371764 containerd[1476]: time="2025-01-13T21:23:46.371709345Z" level=error msg="Failed to destroy network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.376706 containerd[1476]: time="2025-01-13T21:23:46.376654451Z" level=error msg="encountered an error cleaning up failed sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.376914 containerd[1476]: time="2025-01-13T21:23:46.376881385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dnbrj,Uid:13b69416-81ae-4717-b37c-e84f2bf2d81a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.377373 kubelet[2638]: E0113 21:23:46.377348 2638 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.381232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e-shm.mount: Deactivated successfully. 
Jan 13 21:23:46.382118 kubelet[2638]: E0113 21:23:46.381340 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dnbrj" Jan 13 21:23:46.382118 kubelet[2638]: E0113 21:23:46.381386 2638 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dnbrj" Jan 13 21:23:46.382118 kubelet[2638]: E0113 21:23:46.381482 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dnbrj_calico-system(13b69416-81ae-4717-b37c-e84f2bf2d81a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dnbrj_calico-system(13b69416-81ae-4717-b37c-e84f2bf2d81a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:46.442046 kubelet[2638]: I0113 21:23:46.442013 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:23:46.443777 containerd[1476]: time="2025-01-13T21:23:46.443730332Z" level=info msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" Jan 13 21:23:46.444024 containerd[1476]: time="2025-01-13T21:23:46.443980359Z" level=info msg="Ensure that sandbox dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c in task-service has been cleanup successfully" Jan 13 21:23:46.458615 kubelet[2638]: I0113 21:23:46.458590 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:23:46.460398 containerd[1476]: time="2025-01-13T21:23:46.459839868Z" level=info msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" Jan 13 21:23:46.460398 containerd[1476]: time="2025-01-13T21:23:46.460047522Z" level=info msg="Ensure that sandbox 9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e in task-service has been cleanup successfully" Jan 13 21:23:46.470132 kubelet[2638]: I0113 21:23:46.470091 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:46.471393 containerd[1476]: time="2025-01-13T21:23:46.470990770Z" level=info msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" Jan 13 21:23:46.472623 containerd[1476]: time="2025-01-13T21:23:46.471697631Z" level=info msg="Ensure that sandbox 
a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64 in task-service has been cleanup successfully" Jan 13 21:23:46.477750 kubelet[2638]: I0113 21:23:46.476586 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:23:46.479112 containerd[1476]: time="2025-01-13T21:23:46.479082501Z" level=info msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\"" Jan 13 21:23:46.479866 containerd[1476]: time="2025-01-13T21:23:46.479835254Z" level=info msg="Ensure that sandbox 80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964 in task-service has been cleanup successfully" Jan 13 21:23:46.555534 containerd[1476]: time="2025-01-13T21:23:46.555476610Z" level=error msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" failed" error="failed to destroy network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.556185 kubelet[2638]: E0113 21:23:46.555960 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:23:46.556185 kubelet[2638]: E0113 21:23:46.556021 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c"} Jan 13 21:23:46.556185 kubelet[2638]: E0113 21:23:46.556080 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:46.556185 kubelet[2638]: E0113 21:23:46.556128 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jgmn7" podUID="47c77e8b-ecd7-41c6-9767-d6b9be1c4f20" Jan 13 21:23:46.568505 containerd[1476]: time="2025-01-13T21:23:46.568456032Z" level=error msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" failed" error="failed to destroy network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.568928 kubelet[2638]: E0113 21:23:46.568903 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:23:46.569093 kubelet[2638]: E0113 21:23:46.569077 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"} Jan 13 21:23:46.569269 kubelet[2638]: E0113 21:23:46.569251 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"149ffd71-7f07-4581-ab1a-ae1e649167c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:46.569518 kubelet[2638]: E0113 21:23:46.569483 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"149ffd71-7f07-4581-ab1a-ae1e649167c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xmx4z" podUID="149ffd71-7f07-4581-ab1a-ae1e649167c5" Jan 13 21:23:46.588251 containerd[1476]: time="2025-01-13T21:23:46.587996979Z" level=error msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" failed" error="failed to destroy network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.588852 containerd[1476]: time="2025-01-13T21:23:46.588443049Z" level=error msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" failed" error="failed to destroy network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:23:46.588967 kubelet[2638]: E0113 21:23:46.588465 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:23:46.588967 kubelet[2638]: E0113 21:23:46.588502 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e"} Jan 13 21:23:46.588967 kubelet[2638]: E0113 21:23:46.588555 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13b69416-81ae-4717-b37c-e84f2bf2d81a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:46.588967 kubelet[2638]: E0113 21:23:46.588596 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13b69416-81ae-4717-b37c-e84f2bf2d81a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dnbrj" podUID="13b69416-81ae-4717-b37c-e84f2bf2d81a" Jan 13 21:23:46.589426 kubelet[2638]: E0113 21:23:46.588688 2638 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:46.589426 kubelet[2638]: E0113 21:23:46.588728 2638 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64"} Jan 13 21:23:46.589426 kubelet[2638]: E0113 21:23:46.588779 2638 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4e98d21-89de-4cfb-bd25-889f4d6587ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:23:46.589426 kubelet[2638]: E0113 21:23:46.588829 2638 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4e98d21-89de-4cfb-bd25-889f4d6587ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" podUID="f4e98d21-89de-4cfb-bd25-889f4d6587ef" Jan 13 21:23:51.903441 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4070042781.mount: Deactivated successfully. Jan 13 21:23:51.944698 containerd[1476]: time="2025-01-13T21:23:51.944637172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:51.945912 containerd[1476]: time="2025-01-13T21:23:51.945862228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:23:51.946790 containerd[1476]: time="2025-01-13T21:23:51.946708612Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:51.951411 containerd[1476]: time="2025-01-13T21:23:51.951209018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:23:51.952539 containerd[1476]: time="2025-01-13T21:23:51.952360108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.521357785s" Jan 13 21:23:51.952539 containerd[1476]: time="2025-01-13T21:23:51.952409157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:23:51.970553 containerd[1476]: time="2025-01-13T21:23:51.970116624Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:23:52.006640 containerd[1476]: time="2025-01-13T21:23:52.006549275Z" level=info msg="CreateContainer within sandbox \"78d33da9a4ff8a5a2df72dc63bd5f5a1ec353eee0cd314ecd31ed29bbd80e543\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53\"" Jan 13 21:23:52.008991 containerd[1476]: time="2025-01-13T21:23:52.007267011Z" level=info msg="StartContainer for \"18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53\"" Jan 13 21:23:52.044496 systemd[1]: Started cri-containerd-18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53.scope - libcontainer container 18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53. Jan 13 21:23:52.081066 containerd[1476]: time="2025-01-13T21:23:52.080901811Z" level=info msg="StartContainer for \"18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53\" returns successfully" Jan 13 21:23:52.182619 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:23:52.182765 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 21:23:53.551133 systemd[1]: run-containerd-runc-k8s.io-18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53-runc.SFUjbg.mount: Deactivated successfully.
Jan 13 21:23:56.272967 kubelet[2638]: I0113 21:23:56.272554 2638 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:23:56.289200 kubelet[2638]: I0113 21:23:56.288060 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wcbgw" podStartSLOduration=5.689625045 podStartE2EDuration="22.288007101s" podCreationTimestamp="2025-01-13 21:23:34 +0000 UTC" firstStartedPulling="2025-01-13 21:23:35.354486114 +0000 UTC m=+20.351705054" lastFinishedPulling="2025-01-13 21:23:51.952868176 +0000 UTC m=+36.950087110" observedRunningTime="2025-01-13 21:23:52.523104218 +0000 UTC m=+37.520323186" watchObservedRunningTime="2025-01-13 21:23:56.288007101 +0000 UTC m=+41.285226116" Jan 13 21:23:57.061348 kernel: bpftool[3962]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:23:57.546218 systemd-networkd[1386]: vxlan.calico: Link UP Jan 13 21:23:57.546234 systemd-networkd[1386]: vxlan.calico: Gained carrier Jan 13 21:23:58.212426 containerd[1476]: time="2025-01-13T21:23:58.211088002Z" level=info msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.270 [INFO][4082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.270 [INFO][4082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" iface="eth0" netns="/var/run/netns/cni-7a059baa-013c-c3f4-46cc-4ddc7533a7a0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.271 [INFO][4082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" iface="eth0" netns="/var/run/netns/cni-7a059baa-013c-c3f4-46cc-4ddc7533a7a0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.272 [INFO][4082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" iface="eth0" netns="/var/run/netns/cni-7a059baa-013c-c3f4-46cc-4ddc7533a7a0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.272 [INFO][4082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.272 [INFO][4082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.297 [INFO][4088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.297 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.297 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.307 [WARNING][4088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.307 [INFO][4088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.309 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:58.313238 containerd[1476]: 2025-01-13 21:23:58.311 [INFO][4082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:23:58.314512 containerd[1476]: time="2025-01-13T21:23:58.314255141Z" level=info msg="TearDown network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" successfully" Jan 13 21:23:58.314512 containerd[1476]: time="2025-01-13T21:23:58.314320367Z" level=info msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" returns successfully" Jan 13 21:23:58.318003 containerd[1476]: time="2025-01-13T21:23:58.317094542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f864b4c96-r64tw,Uid:ea8ed1ad-5383-412b-b6fb-8d38f75a663b,Namespace:calico-system,Attempt:1,}" Jan 13 21:23:58.320436 systemd[1]: run-netns-cni\x2d7a059baa\x2d013c\x2dc3f4\x2d46cc\x2d4ddc7533a7a0.mount: Deactivated successfully. 
Jan 13 21:23:58.465366 systemd-networkd[1386]: calie42bdd2b536: Link UP Jan 13 21:23:58.467649 systemd-networkd[1386]: calie42bdd2b536: Gained carrier Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.382 [INFO][4095] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0 calico-kube-controllers-6f864b4c96- calico-system ea8ed1ad-5383-412b-b6fb-8d38f75a663b 740 0 2025-01-13 21:23:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f864b4c96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal calico-kube-controllers-6f864b4c96-r64tw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie42bdd2b536 [] []}} ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.382 [INFO][4095] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.418 [INFO][4105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" HandleID="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.428 [INFO][4105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" HandleID="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6f864b4c96-r64tw", "timestamp":"2025-01-13 21:23:58.418879946 +0000 UTC"}, Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.428 [INFO][4105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.429 [INFO][4105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.429 [INFO][4105] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.430 [INFO][4105] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.435 [INFO][4105] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.440 [INFO][4105] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.442 [INFO][4105] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.444 [INFO][4105] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.444 [INFO][4105] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.446 [INFO][4105] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306 Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.452 [INFO][4105] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.458 [INFO][4105] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.65/26] block=192.168.82.64/26 handle="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.459 [INFO][4105] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.65/26] handle="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.459 [INFO][4105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:23:58.496324 containerd[1476]: 2025-01-13 21:23:58.459 [INFO][4105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.65/26] IPv6=[] ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" HandleID="k8s-pod-network.8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.461 [INFO][4095] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0", GenerateName:"calico-kube-controllers-6f864b4c96-", Namespace:"calico-system", SelfLink:"", UID:"ea8ed1ad-5383-412b-b6fb-8d38f75a663b", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f864b4c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6f864b4c96-r64tw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42bdd2b536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.461 [INFO][4095] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.65/32] ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.461 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie42bdd2b536 ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.464 [INFO][4095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.465 [INFO][4095] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0", GenerateName:"calico-kube-controllers-6f864b4c96-", Namespace:"calico-system", SelfLink:"", UID:"ea8ed1ad-5383-412b-b6fb-8d38f75a663b", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f864b4c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306", Pod:"calico-kube-controllers-6f864b4c96-r64tw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42bdd2b536", MAC:"46:8c:aa:b1:ea:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:58.497455 containerd[1476]: 2025-01-13 21:23:58.489 [INFO][4095] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306" Namespace="calico-system" Pod="calico-kube-controllers-6f864b4c96-r64tw" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:23:58.534501 containerd[1476]: time="2025-01-13T21:23:58.534280457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:58.534501 containerd[1476]: time="2025-01-13T21:23:58.534417417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:58.534501 containerd[1476]: time="2025-01-13T21:23:58.534437361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:58.535018 containerd[1476]: time="2025-01-13T21:23:58.534547403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:58.572475 systemd[1]: Started cri-containerd-8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306.scope - libcontainer container 8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306. Jan 13 21:23:58.630007 containerd[1476]: time="2025-01-13T21:23:58.629928929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f864b4c96-r64tw,Uid:ea8ed1ad-5383-412b-b6fb-8d38f75a663b,Namespace:calico-system,Attempt:1,} returns sandbox id \"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306\"" Jan 13 21:23:58.633686 containerd[1476]: time="2025-01-13T21:23:58.633652419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:23:59.001499 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jan 13 21:23:59.212464 containerd[1476]: time="2025-01-13T21:23:59.211980113Z" level=info msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" Jan 13 21:23:59.214743 containerd[1476]: time="2025-01-13T21:23:59.212920987Z" level=info msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\"" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.309 [INFO][4194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.310 [INFO][4194] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" iface="eth0" netns="/var/run/netns/cni-bb4c3498-5a8c-a707-ce5e-03d59ff7c254" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.319 [INFO][4194] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" iface="eth0" netns="/var/run/netns/cni-bb4c3498-5a8c-a707-ce5e-03d59ff7c254" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.321 [INFO][4194] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" iface="eth0" netns="/var/run/netns/cni-bb4c3498-5a8c-a707-ce5e-03d59ff7c254" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.321 [INFO][4194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.321 [INFO][4194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.361 [INFO][4211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.361 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.361 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.370 [WARNING][4211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.370 [INFO][4211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.372 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:59.379369 containerd[1476]: 2025-01-13 21:23:59.373 [INFO][4194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:23:59.380478 containerd[1476]: time="2025-01-13T21:23:59.380265782Z" level=info msg="TearDown network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" successfully" Jan 13 21:23:59.380478 containerd[1476]: time="2025-01-13T21:23:59.380345002Z" level=info msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" returns successfully" Jan 13 21:23:59.384432 containerd[1476]: time="2025-01-13T21:23:59.384385947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-wwr94,Uid:f4e98d21-89de-4cfb-bd25-889f4d6587ef,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:23:59.385710 systemd[1]: run-netns-cni\x2dbb4c3498\x2d5a8c\x2da707\x2dce5e\x2d03d59ff7c254.mount: Deactivated successfully. Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.314 [INFO][4193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.314 [INFO][4193] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" iface="eth0" netns="/var/run/netns/cni-42cd72a6-1628-41fe-9e37-37260f28996b" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.314 [INFO][4193] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" iface="eth0" netns="/var/run/netns/cni-42cd72a6-1628-41fe-9e37-37260f28996b" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.315 [INFO][4193] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" iface="eth0" netns="/var/run/netns/cni-42cd72a6-1628-41fe-9e37-37260f28996b" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.315 [INFO][4193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.315 [INFO][4193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.363 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.363 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.372 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.388 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.388 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.390 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:23:59.396582 containerd[1476]: 2025-01-13 21:23:59.394 [INFO][4193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Jan 13 21:23:59.399313 containerd[1476]: time="2025-01-13T21:23:59.396884348Z" level=info msg="TearDown network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" successfully" Jan 13 21:23:59.399313 containerd[1476]: time="2025-01-13T21:23:59.396918871Z" level=info msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" returns successfully" Jan 13 21:23:59.399313 containerd[1476]: time="2025-01-13T21:23:59.398659280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-zrfxq,Uid:8743f2ec-4b00-4436-8e8d-31d6c433e17f,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:23:59.403569 systemd[1]: run-netns-cni\x2d42cd72a6\x2d1628\x2d41fe\x2d9e37\x2d37260f28996b.mount: Deactivated successfully. 
Jan 13 21:23:59.684407 systemd-networkd[1386]: calief0d75657fe: Link UP Jan 13 21:23:59.691488 systemd-networkd[1386]: calief0d75657fe: Gained carrier Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.508 [INFO][4221] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0 calico-apiserver-b8b66ff96- calico-apiserver f4e98d21-89de-4cfb-bd25-889f4d6587ef 749 0 2025-01-13 21:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b8b66ff96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal calico-apiserver-b8b66ff96-wwr94 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief0d75657fe [] []}} ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.509 [INFO][4221] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.584 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" HandleID="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.610 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" HandleID="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"calico-apiserver-b8b66ff96-wwr94", "timestamp":"2025-01-13 21:23:59.584971203 +0000 UTC"}, Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.610 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.611 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.612 [INFO][4244] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.615 [INFO][4244] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.624 [INFO][4244] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.636 [INFO][4244] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.639 [INFO][4244] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.642 [INFO][4244] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.642 [INFO][4244] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.644 [INFO][4244] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920 Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.652 [INFO][4244] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.671 [INFO][4244] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.66/26] block=192.168.82.64/26 handle="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.672 [INFO][4244] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.66/26] handle="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.672 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
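Editor's note: every assignment and release in this log is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released". Two calico-apiserver pods are being wired up concurrently here (workers 4244 and 4248), so without that node-wide lock both could read the same free ordinal from the shared /26 block. A minimal sketch of the idea, using a plain sync.Mutex as a stand-in for whatever lock Calico actually uses:

```go
// Why the host-wide lock matters: serialise concurrent CNI ADDs on one node.
package main

import (
	"fmt"
	"sync"
)

var (
	hostWideIPAM sync.Mutex
	nextOrdinal  = 1 // .65 is ordinal 1 in 192.168.82.64/26
)

func autoAssign(pod string) int {
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	hostWideIPAM.Lock()
	defer func() { hostWideIPAM.Unlock(); fmt.Println(pod, "released lock") }()
	ord := nextOrdinal // read-modify-write is safe only under the lock
	nextOrdinal++
	return ord
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver-b8b66ff96-wwr94",
		"calico-apiserver-b8b66ff96-zrfxq",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(p, "got ordinal", autoAssign(p)) }(pod)
	}
	wg.Wait() // each pod gets a distinct ordinal, as .66 and .67 do in the log
}
```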
Jan 13 21:23:59.721275 containerd[1476]: 2025-01-13 21:23:59.672 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.66/26] IPv6=[] ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" HandleID="k8s-pod-network.08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.677 [INFO][4221] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4e98d21-89de-4cfb-bd25-889f4d6587ef", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-b8b66ff96-wwr94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief0d75657fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.677 [INFO][4221] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.66/32] ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.677 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief0d75657fe ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.692 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" 
WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.693 [INFO][4221] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4e98d21-89de-4cfb-bd25-889f4d6587ef", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920", Pod:"calico-apiserver-b8b66ff96-wwr94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief0d75657fe", MAC:"1e:43:da:48:41:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:59.722447 containerd[1476]: 2025-01-13 21:23:59.718 [INFO][4221] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-wwr94" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:23:59.782319 containerd[1476]: time="2025-01-13T21:23:59.780394389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:59.782319 containerd[1476]: time="2025-01-13T21:23:59.780828050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:59.782319 containerd[1476]: time="2025-01-13T21:23:59.780882824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:59.782319 containerd[1476]: time="2025-01-13T21:23:59.781039618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:59.803870 systemd-networkd[1386]: calia6ea88aa65f: Link UP Jan 13 21:23:59.806339 systemd-networkd[1386]: calia6ea88aa65f: Gained carrier Jan 13 21:23:59.848541 systemd[1]: Started cri-containerd-08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920.scope - libcontainer container 08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920. Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.544 [INFO][4230] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0 calico-apiserver-b8b66ff96- calico-apiserver 8743f2ec-4b00-4436-8e8d-31d6c433e17f 750 0 2025-01-13 21:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b8b66ff96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal calico-apiserver-b8b66ff96-zrfxq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia6ea88aa65f [] []}} ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.544 [INFO][4230] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.650 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" HandleID="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.672 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" HandleID="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000508e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"calico-apiserver-b8b66ff96-zrfxq", "timestamp":"2025-01-13 21:23:59.650385502 +0000 UTC"}, Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.672 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.673 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.673 [INFO][4248] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.676 [INFO][4248] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.688 [INFO][4248] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.707 [INFO][4248] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.714 [INFO][4248] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.729 [INFO][4248] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.731 [INFO][4248] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.737 [INFO][4248] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9 Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.749 [INFO][4248] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.765 [INFO][4248] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.67/26] block=192.168.82.64/26 handle="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.765 [INFO][4248] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.67/26] handle="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.765 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
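Editor's note: the host-side interface names in this log (calie42bdd2b536, calief0d75657fe, calia6ea88aa65f) follow Calico's default veth naming: a "cali" prefix plus a truncated hash of a workload identifier, keeping the result within Linux's 15-character interface-name limit. The sketch below assumes SHA-1 over the namespace/pod name; the exact input Calico hashes is an assumption, so this will not reproduce the log's names byte-for-byte.

```go
// Sketch of "cali" + truncated-hash host veth naming (input is an assumption).
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName keeps "cali" + 11 hex chars = 15 chars, the IFNAMSIZ-1 limit.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical identifier; a live node derives it from the endpoint.
	fmt.Println(vethName("calico-apiserver/calico-apiserver-b8b66ff96-zrfxq"))
}
```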
Jan 13 21:23:59.853872 containerd[1476]: 2025-01-13 21:23:59.765 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.67/26] IPv6=[] ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" HandleID="k8s-pod-network.c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.788 [INFO][4230] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"8743f2ec-4b00-4436-8e8d-31d6c433e17f", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-b8b66ff96-zrfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6ea88aa65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.793 [INFO][4230] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.67/32] ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.793 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6ea88aa65f ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.807 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" 
WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.808 [INFO][4230] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"8743f2ec-4b00-4436-8e8d-31d6c433e17f", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9", Pod:"calico-apiserver-b8b66ff96-zrfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6ea88aa65f", MAC:"1a:41:42:55:fe:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:23:59.857815 containerd[1476]: 2025-01-13 21:23:59.844 [INFO][4230] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9" Namespace="calico-apiserver" Pod="calico-apiserver-b8b66ff96-zrfxq" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0" Jan 13 21:23:59.952482 containerd[1476]: time="2025-01-13T21:23:59.949806081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:59.952482 containerd[1476]: time="2025-01-13T21:23:59.949889146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:59.952482 containerd[1476]: time="2025-01-13T21:23:59.949920427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:59.952482 containerd[1476]: time="2025-01-13T21:23:59.950050289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:59.977789 containerd[1476]: time="2025-01-13T21:23:59.977737987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-wwr94,Uid:f4e98d21-89de-4cfb-bd25-889f4d6587ef,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920\"" Jan 13 21:23:59.995569 systemd[1]: Started cri-containerd-c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9.scope - libcontainer container c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9. Jan 13 21:24:00.067239 containerd[1476]: time="2025-01-13T21:24:00.067194433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b8b66ff96-zrfxq,Uid:8743f2ec-4b00-4436-8e8d-31d6c433e17f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9\"" Jan 13 21:24:00.212534 containerd[1476]: time="2025-01-13T21:24:00.210957903Z" level=info msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" Jan 13 21:24:00.347375 systemd-networkd[1386]: calie42bdd2b536: Gained IPv6LL Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.300 [INFO][4384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.300 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" iface="eth0" netns="/var/run/netns/cni-af4513a6-a19e-b4f1-d6b5-615a3c384787" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.300 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" iface="eth0" netns="/var/run/netns/cni-af4513a6-a19e-b4f1-d6b5-615a3c384787" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.301 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" iface="eth0" netns="/var/run/netns/cni-af4513a6-a19e-b4f1-d6b5-615a3c384787" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.301 [INFO][4384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.301 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.365 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.365 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.365 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.376 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.376 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.393 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:00.403480 containerd[1476]: 2025-01-13 21:24:00.401 [INFO][4384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:00.407173 containerd[1476]: time="2025-01-13T21:24:00.405502474Z" level=info msg="TearDown network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" successfully" Jan 13 21:24:00.407173 containerd[1476]: time="2025-01-13T21:24:00.405542163Z" level=info msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" returns successfully" Jan 13 21:24:00.407980 containerd[1476]: time="2025-01-13T21:24:00.407932138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dnbrj,Uid:13b69416-81ae-4717-b37c-e84f2bf2d81a,Namespace:calico-system,Attempt:1,}" Jan 13 21:24:00.415098 systemd[1]: run-netns-cni\x2daf4513a6\x2da19e\x2db4f1\x2dd6b5\x2d615a3c384787.mount: Deactivated successfully. 
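Editor's note: the "Gained IPv6LL" events from systemd-networkd (vxlan.calico earlier, calie42bdd2b536 just above) mark the kernel finishing duplicate-address detection on the interface's IPv6 link-local address. With the default addr_gen_mode that address is derived from the MAC by EUI-64, which is pure bit-twiddling; kernels configured for stable-privacy addresses derive it differently. A self-contained sketch using calie42bdd2b536's MAC from the log:

```go
// Derive the classic EUI-64 IPv6 link-local address from a MAC.
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC flips the universal/local bit of the first MAC byte and
// splices ff:fe into the middle, then prefixes fe80::/64.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	return net.IP{0xfe, 0x80, 0, 0, 0, 0, 0, 0,
		mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]}
}

func main() {
	mac, _ := net.ParseMAC("46:8c:aa:b1:ea:5c") // calie42bdd2b536's MAC above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::448c:aaff:feb1:ea5c
}
```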
Jan 13 21:24:00.718469 systemd-networkd[1386]: cali63b99b6948b: Link UP Jan 13 21:24:00.718805 systemd-networkd[1386]: cali63b99b6948b: Gained carrier Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.567 [INFO][4397] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0 csi-node-driver- calico-system 13b69416-81ae-4717-b37c-e84f2bf2d81a 761 0 2025-01-13 21:23:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal csi-node-driver-dnbrj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63b99b6948b [] []}} ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.567 [INFO][4397] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.639 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" HandleID="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.655 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" HandleID="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"csi-node-driver-dnbrj", "timestamp":"2025-01-13 21:24:00.639006265 +0000 UTC"}, Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.655 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.655 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.655 [INFO][4407] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.658 [INFO][4407] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.665 [INFO][4407] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.674 [INFO][4407] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.678 [INFO][4407] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.683 [INFO][4407] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.683 [INFO][4407] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.686 [INFO][4407] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795 Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.695 [INFO][4407] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.707 [INFO][4407] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.68/26] block=192.168.82.64/26 handle="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.707 [INFO][4407] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.68/26] handle="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.707 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
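Editor's note: the WorkloadEndpoint names throughout this log (e.g. ci--4081--3--0--...-k8s-csi--node--driver--dnbrj-eth0) double every literal '-' so that single dashes can safely delimit the <node>-k8s-<pod>-<iface> fields. That escaping rule is inferred from the visible names, not quoted from Calico's source; a round-trip sketch:

```go
// Round-trip the '-' escaping visible in the WorkloadEndpoint names.
package main

import (
	"fmt"
	"strings"
)

func escape(s string) string   { return strings.ReplaceAll(s, "-", "--") }
func unescape(s string) string { return strings.ReplaceAll(s, "--", "-") }

func main() {
	fmt.Println(escape("csi-node-driver-dnbrj"))
	// csi--node--driver--dnbrj, as embedded in the endpoint name above
	fmt.Println(unescape("ci--4081--3--0--092876cd8e111f5f553b"))
	// ci-4081-3-0-092876cd8e111f5f553b, the node name's first label
}
```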
Jan 13 21:24:00.744525 containerd[1476]: 2025-01-13 21:24:00.707 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.68/26] IPv6=[] ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" HandleID="k8s-pod-network.9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.711 [INFO][4397] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13b69416-81ae-4717-b37c-e84f2bf2d81a", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-dnbrj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b99b6948b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.711 [INFO][4397] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.68/32] ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.711 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63b99b6948b ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.718 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" 
WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.719 [INFO][4397] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13b69416-81ae-4717-b37c-e84f2bf2d81a", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795", Pod:"csi-node-driver-dnbrj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b99b6948b", MAC:"4a:fd:72:66:6a:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:00.747514 containerd[1476]: 2025-01-13 21:24:00.740 [INFO][4397] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795" Namespace="calico-system" Pod="csi-node-driver-dnbrj" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:00.807505 containerd[1476]: time="2025-01-13T21:24:00.807391157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:24:00.807505 containerd[1476]: time="2025-01-13T21:24:00.807462477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:24:00.808952 containerd[1476]: time="2025-01-13T21:24:00.808046669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:00.809894 containerd[1476]: time="2025-01-13T21:24:00.809347509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:00.853497 systemd[1]: Started cri-containerd-9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795.scope - libcontainer container 9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795. Jan 13 21:24:00.898253 containerd[1476]: time="2025-01-13T21:24:00.898206991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dnbrj,Uid:13b69416-81ae-4717-b37c-e84f2bf2d81a,Namespace:calico-system,Attempt:1,} returns sandbox id \"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795\"" Jan 13 21:24:01.114184 systemd-networkd[1386]: calia6ea88aa65f: Gained IPv6LL Jan 13 21:24:01.209612 containerd[1476]: time="2025-01-13T21:24:01.209546481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:01.212956 containerd[1476]: time="2025-01-13T21:24:01.212494466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:24:01.212956 containerd[1476]: time="2025-01-13T21:24:01.212670257Z" level=info msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" Jan 13 21:24:01.215540 containerd[1476]: time="2025-01-13T21:24:01.215276409Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:01.221524 containerd[1476]: time="2025-01-13T21:24:01.221255708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:01.223673 containerd[1476]: time="2025-01-13T21:24:01.223352327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.589453311s" Jan 13 21:24:01.223673 containerd[1476]: time="2025-01-13T21:24:01.223398721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:24:01.225508 containerd[1476]: time="2025-01-13T21:24:01.223852084Z" level=info msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\"" Jan 13 21:24:01.233016 containerd[1476]: time="2025-01-13T21:24:01.232982565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:24:01.240139 containerd[1476]: time="2025-01-13T21:24:01.240010840Z" level=info msg="CreateContainer within sandbox \"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:24:01.268004 containerd[1476]: time="2025-01-13T21:24:01.267954389Z" level=info msg="CreateContainer within sandbox \"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da\"" Jan 13 
21:24:01.269501 containerd[1476]: time="2025-01-13T21:24:01.269354442Z" level=info msg="StartContainer for \"b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da\"" Jan 13 21:24:01.369592 systemd-networkd[1386]: calief0d75657fe: Gained IPv6LL Jan 13 21:24:01.371250 systemd[1]: Started cri-containerd-b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da.scope - libcontainer container b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da. Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.367 [INFO][4496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.368 [INFO][4496] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" iface="eth0" netns="/var/run/netns/cni-91d03897-debd-cdff-fce0-58330b998fe0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.368 [INFO][4496] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" iface="eth0" netns="/var/run/netns/cni-91d03897-debd-cdff-fce0-58330b998fe0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.374 [INFO][4496] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" iface="eth0" netns="/var/run/netns/cni-91d03897-debd-cdff-fce0-58330b998fe0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.374 [INFO][4496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.374 [INFO][4496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.443 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.443 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.443 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.456 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.456 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.458 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:01.463564 containerd[1476]: 2025-01-13 21:24:01.460 [INFO][4496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:01.465677 containerd[1476]: time="2025-01-13T21:24:01.464799070Z" level=info msg="TearDown network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" successfully" Jan 13 21:24:01.465677 containerd[1476]: time="2025-01-13T21:24:01.464835892Z" level=info msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" returns successfully" Jan 13 21:24:01.466712 containerd[1476]: time="2025-01-13T21:24:01.466658931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jgmn7,Uid:47c77e8b-ecd7-41c6-9767-d6b9be1c4f20,Namespace:kube-system,Attempt:1,}" Jan 13 21:24:01.475673 systemd[1]: run-netns-cni\x2d91d03897\x2ddebd\x2dcdff\x2dfce0\x2d58330b998fe0.mount: Deactivated successfully. Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.361 [INFO][4497] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.368 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" iface="eth0" netns="/var/run/netns/cni-1c9dcdae-d3fd-a6c0-60d1-e51b7f85c1c2" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.368 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" iface="eth0" netns="/var/run/netns/cni-1c9dcdae-d3fd-a6c0-60d1-e51b7f85c1c2" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.373 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" iface="eth0" netns="/var/run/netns/cni-1c9dcdae-d3fd-a6c0-60d1-e51b7f85c1c2" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.373 [INFO][4497] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.374 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.450 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.450 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.459 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.482 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.482 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.487 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:01.508508 containerd[1476]: 2025-01-13 21:24:01.506 [INFO][4497] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Jan 13 21:24:01.511760 containerd[1476]: time="2025-01-13T21:24:01.510859649Z" level=info msg="TearDown network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" successfully" Jan 13 21:24:01.511760 containerd[1476]: time="2025-01-13T21:24:01.510942156Z" level=info msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" returns successfully" Jan 13 21:24:01.513638 containerd[1476]: time="2025-01-13T21:24:01.512201740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmx4z,Uid:149ffd71-7f07-4581-ab1a-ae1e649167c5,Namespace:kube-system,Attempt:1,}" Jan 13 21:24:01.522774 systemd[1]: run-netns-cni\x2d1c9dcdae\x2dd3fd\x2da6c0\x2d60d1\x2de51b7f85c1c2.mount: Deactivated successfully. 
Jan 13 21:24:01.572115 containerd[1476]: time="2025-01-13T21:24:01.571613494Z" level=info msg="StartContainer for \"b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da\" returns successfully" Jan 13 21:24:01.823639 kubelet[2638]: I0113 21:24:01.823594 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f864b4c96-r64tw" podStartSLOduration=24.231581374 podStartE2EDuration="26.823531751s" podCreationTimestamp="2025-01-13 21:23:35 +0000 UTC" firstStartedPulling="2025-01-13 21:23:58.632467171 +0000 UTC m=+43.629686118" lastFinishedPulling="2025-01-13 21:24:01.224417563 +0000 UTC m=+46.221636495" observedRunningTime="2025-01-13 21:24:01.613493929 +0000 UTC m=+46.610712886" watchObservedRunningTime="2025-01-13 21:24:01.823531751 +0000 UTC m=+46.820750682" Jan 13 21:24:01.835972 systemd-networkd[1386]: cali7f9a9fe6b46: Link UP Jan 13 21:24:01.838232 systemd-networkd[1386]: cali7f9a9fe6b46: Gained carrier Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.623 [INFO][4548] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0 coredns-76f75df574- kube-system 47c77e8b-ecd7-41c6-9767-d6b9be1c4f20 773 0 2025-01-13 21:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal coredns-76f75df574-jgmn7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f9a9fe6b46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.623 [INFO][4548] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.700 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" HandleID="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.725 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" HandleID="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"coredns-76f75df574-jgmn7", "timestamp":"2025-01-13 21:24:01.69996524 +0000 UTC"}, 
Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.731 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.731 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.731 [INFO][4593] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.739 [INFO][4593] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.751 [INFO][4593] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.766 [INFO][4593] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.773 [INFO][4593] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.783 [INFO][4593] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.783 [INFO][4593] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.787 [INFO][4593] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629 Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.797 [INFO][4593] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.814 [INFO][4593] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.69/26] block=192.168.82.64/26 handle="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.814 [INFO][4593] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.69/26] handle="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.814 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:24:01.870335 containerd[1476]: 2025-01-13 21:24:01.814 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.69/26] IPv6=[] ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" HandleID="k8s-pod-network.fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.827 [INFO][4548] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-jgmn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9a9fe6b46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.828 [INFO][4548] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.69/32] ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.828 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f9a9fe6b46 ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.833 [INFO][4548] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.834 [INFO][4548] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629", Pod:"coredns-76f75df574-jgmn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9a9fe6b46", MAC:"2e:e3:3e:86:83:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:01.873947 containerd[1476]: 2025-01-13 21:24:01.865 [INFO][4548] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629" Namespace="kube-system" Pod="coredns-76f75df574-jgmn7" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:01.932008 containerd[1476]: time="2025-01-13T21:24:01.931044514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:24:01.932008 containerd[1476]: time="2025-01-13T21:24:01.931142405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:24:01.932008 containerd[1476]: time="2025-01-13T21:24:01.931168708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:01.932008 containerd[1476]: time="2025-01-13T21:24:01.931338262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:01.948937 systemd-networkd[1386]: cali014caca9130: Link UP Jan 13 21:24:01.951627 systemd-networkd[1386]: cali014caca9130: Gained carrier Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.695 [INFO][4563] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0 coredns-76f75df574- kube-system 149ffd71-7f07-4581-ab1a-ae1e649167c5 772 0 2025-01-13 21:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal coredns-76f75df574-xmx4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali014caca9130 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.696 [INFO][4563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.798 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" HandleID="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.849 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" HandleID="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fc8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", "pod":"coredns-76f75df574-xmx4z", "timestamp":"2025-01-13 21:24:01.798897734 +0000 UTC"}, Hostname:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.849 [INFO][4607] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.849 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.850 [INFO][4607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal' Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.864 [INFO][4607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.883 [INFO][4607] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.893 [INFO][4607] ipam/ipam.go 489: Trying affinity for 192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.897 [INFO][4607] ipam/ipam.go 155: Attempting to load block cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.901 [INFO][4607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.901 [INFO][4607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.904 [INFO][4607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.914 [INFO][4607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.932 [INFO][4607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.82.70/26] block=192.168.82.64/26 handle="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.932 [INFO][4607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.70/26] handle="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" host="ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal" Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.933 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
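In the WorkloadEndpoint dumps above and below, Go's struct formatting prints the coredns ports in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS Prometheus metrics port). A tiny check of the correspondence:

```go
package main

import "fmt"

func main() {
	for name, port := range map[string]uint16{
		"dns":     0x35,   // -> 53 (UDP)
		"dns-tcp": 0x35,   // -> 53 (TCP)
		"metrics": 0x23c1, // -> 9153 (TCP)
	} {
		fmt.Printf("%s: %d\n", name, port)
	}
}
```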
Jan 13 21:24:01.972539 containerd[1476]: 2025-01-13 21:24:01.933 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.70/26] IPv6=[] ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" HandleID="k8s-pod-network.91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.941 [INFO][4563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149ffd71-7f07-4581-ab1a-ae1e649167c5", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-xmx4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014caca9130", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.941 [INFO][4563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.82.70/32] ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.942 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali014caca9130 ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.945 [INFO][4563] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.946 [INFO][4563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149ffd71-7f07-4581-ab1a-ae1e649167c5", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f", Pod:"coredns-76f75df574-xmx4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014caca9130", MAC:"6e:60:b1:e8:68:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:01.973863 containerd[1476]: 2025-01-13 21:24:01.963 [INFO][4563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f" Namespace="kube-system" Pod="coredns-76f75df574-xmx4z" WorkloadEndpoint="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0" Jan 13 21:24:02.016619 systemd[1]: Started cri-containerd-fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629.scope - libcontainer container fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629. Jan 13 21:24:02.062776 containerd[1476]: time="2025-01-13T21:24:02.062163116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:24:02.062776 containerd[1476]: time="2025-01-13T21:24:02.062243522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:24:02.062776 containerd[1476]: time="2025-01-13T21:24:02.062273327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:02.062776 containerd[1476]: time="2025-01-13T21:24:02.062421368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:02.097536 systemd[1]: Started cri-containerd-91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f.scope - libcontainer container 91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f. Jan 13 21:24:02.121419 containerd[1476]: time="2025-01-13T21:24:02.121360659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jgmn7,Uid:47c77e8b-ecd7-41c6-9767-d6b9be1c4f20,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629\"" Jan 13 21:24:02.128163 containerd[1476]: time="2025-01-13T21:24:02.127920335Z" level=info msg="CreateContainer within sandbox \"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:24:02.147229 containerd[1476]: time="2025-01-13T21:24:02.147187316Z" level=info msg="CreateContainer within sandbox \"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ebe91231a777b067c189be6194c0ecf606670cfb8f9944c98d5a2af1289e4f3f\"" Jan 13 21:24:02.148161 containerd[1476]: time="2025-01-13T21:24:02.148126102Z" level=info msg="StartContainer for \"ebe91231a777b067c189be6194c0ecf606670cfb8f9944c98d5a2af1289e4f3f\"" Jan 13 21:24:02.198819 systemd[1]: Started cri-containerd-ebe91231a777b067c189be6194c0ecf606670cfb8f9944c98d5a2af1289e4f3f.scope - libcontainer container ebe91231a777b067c189be6194c0ecf606670cfb8f9944c98d5a2af1289e4f3f. 
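The CreateContainer/StartContainer pairs in these records are CRI calls from the kubelet; underneath, containerd separates a container (metadata, snapshot, OCI spec) from its task (the running process, whose cgroup is the cri-containerd-<id>.scope unit systemd reports above). A sketch of the same two-step lifecycle against the plain containerd Go client — illustrative only, since the kubelet actually drives this over the CRI gRPC API, and the image reference here is an arbitrary example, not one from this log:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet uses, in its CRI namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// The "CreateContainer" half: metadata + snapshot + OCI spec.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// The "StartContainer" half: a task is the running instance; starting it
	// is what makes a new libcontainer scope appear on the host.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```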
Jan 13 21:24:02.202715 containerd[1476]: time="2025-01-13T21:24:02.202453135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmx4z,Uid:149ffd71-7f07-4581-ab1a-ae1e649167c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f\"" Jan 13 21:24:02.208748 containerd[1476]: time="2025-01-13T21:24:02.207854825Z" level=info msg="CreateContainer within sandbox \"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:24:02.239190 containerd[1476]: time="2025-01-13T21:24:02.239142930Z" level=info msg="CreateContainer within sandbox \"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f94cb56424a3a35e0f5718813d58fadaf0583e82c1823cd8236647fc696af78e\"" Jan 13 21:24:02.242195 containerd[1476]: time="2025-01-13T21:24:02.241906206Z" level=info msg="StartContainer for \"f94cb56424a3a35e0f5718813d58fadaf0583e82c1823cd8236647fc696af78e\"" Jan 13 21:24:02.264987 containerd[1476]: time="2025-01-13T21:24:02.264603855Z" level=info msg="StartContainer for \"ebe91231a777b067c189be6194c0ecf606670cfb8f9944c98d5a2af1289e4f3f\" returns successfully" Jan 13 21:24:02.315578 systemd[1]: Started cri-containerd-f94cb56424a3a35e0f5718813d58fadaf0583e82c1823cd8236647fc696af78e.scope - libcontainer container f94cb56424a3a35e0f5718813d58fadaf0583e82c1823cd8236647fc696af78e. Jan 13 21:24:02.419438 containerd[1476]: time="2025-01-13T21:24:02.419321859Z" level=info msg="StartContainer for \"f94cb56424a3a35e0f5718813d58fadaf0583e82c1823cd8236647fc696af78e\" returns successfully" Jan 13 21:24:02.586704 systemd-networkd[1386]: cali63b99b6948b: Gained IPv6LL Jan 13 21:24:02.625846 kubelet[2638]: I0113 21:24:02.624735 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xmx4z" podStartSLOduration=34.624683374 podStartE2EDuration="34.624683374s" podCreationTimestamp="2025-01-13 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:02.623884757 +0000 UTC m=+47.621103729" watchObservedRunningTime="2025-01-13 21:24:02.624683374 +0000 UTC m=+47.621902330" Jan 13 21:24:02.651436 kubelet[2638]: I0113 21:24:02.651397 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jgmn7" podStartSLOduration=34.65127735 podStartE2EDuration="34.65127735s" podCreationTimestamp="2025-01-13 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:02.650860042 +0000 UTC m=+47.648078998" watchObservedRunningTime="2025-01-13 21:24:02.65127735 +0000 UTC m=+47.648496311" Jan 13 21:24:02.969767 systemd-networkd[1386]: cali7f9a9fe6b46: Gained IPv6LL Jan 13 21:24:03.702102 containerd[1476]: time="2025-01-13T21:24:03.702037740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:03.703322 containerd[1476]: time="2025-01-13T21:24:03.703224927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:24:03.704708 containerd[1476]: time="2025-01-13T21:24:03.704629621Z" level=info msg="ImageCreate event 
name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:03.707573 containerd[1476]: time="2025-01-13T21:24:03.707511383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:03.709262 containerd[1476]: time="2025-01-13T21:24:03.708473806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.474121159s" Jan 13 21:24:03.709262 containerd[1476]: time="2025-01-13T21:24:03.708518044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:24:03.710501 containerd[1476]: time="2025-01-13T21:24:03.709899938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:24:03.712567 containerd[1476]: time="2025-01-13T21:24:03.712463704Z" level=info msg="CreateContainer within sandbox \"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:24:03.730840 containerd[1476]: time="2025-01-13T21:24:03.730795647Z" level=info msg="CreateContainer within sandbox \"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"39daf5374a768d0d40747dc139fd96a713ecf5add8234081f775ad99fa31808b\"" Jan 13 21:24:03.734247 containerd[1476]: time="2025-01-13T21:24:03.733407008Z" level=info msg="StartContainer for \"39daf5374a768d0d40747dc139fd96a713ecf5add8234081f775ad99fa31808b\"" Jan 13 21:24:03.737595 systemd-networkd[1386]: cali014caca9130: Gained IPv6LL Jan 13 21:24:03.800511 systemd[1]: Started cri-containerd-39daf5374a768d0d40747dc139fd96a713ecf5add8234081f775ad99fa31808b.scope - libcontainer container 39daf5374a768d0d40747dc139fd96a713ecf5add8234081f775ad99fa31808b. 
Jan 13 21:24:03.864919 containerd[1476]: time="2025-01-13T21:24:03.864724975Z" level=info msg="StartContainer for \"39daf5374a768d0d40747dc139fd96a713ecf5add8234081f775ad99fa31808b\" returns successfully" Jan 13 21:24:03.948805 containerd[1476]: time="2025-01-13T21:24:03.948745706Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:03.950882 containerd[1476]: time="2025-01-13T21:24:03.950822973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:24:03.954392 containerd[1476]: time="2025-01-13T21:24:03.954113893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 244.169426ms" Jan 13 21:24:03.954392 containerd[1476]: time="2025-01-13T21:24:03.954175827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:24:03.958520 containerd[1476]: time="2025-01-13T21:24:03.958484776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:24:03.962786 containerd[1476]: time="2025-01-13T21:24:03.962745777Z" level=info msg="CreateContainer within sandbox \"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:24:03.985358 containerd[1476]: time="2025-01-13T21:24:03.985324036Z" level=info msg="CreateContainer within sandbox \"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"828e6321d61929f75b8eebbe703070b97cd94ff7babc198be3c2a2c1c3974f1e\"" Jan 13 21:24:03.987353 containerd[1476]: time="2025-01-13T21:24:03.986541935Z" level=info msg="StartContainer for \"828e6321d61929f75b8eebbe703070b97cd94ff7babc198be3c2a2c1c3974f1e\"" Jan 13 21:24:04.048840 systemd[1]: Started cri-containerd-828e6321d61929f75b8eebbe703070b97cd94ff7babc198be3c2a2c1c3974f1e.scope - libcontainer container 828e6321d61929f75b8eebbe703070b97cd94ff7babc198be3c2a2c1c3974f1e. 
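The two "Pulled image …calico/apiserver:v3.29.1" lines above differ by an order of magnitude (2.474s, then 244ms with only 77 bytes read, and an ImageUpdate rather than ImageCreate event): containerd's content store is content-addressed, so the second pull re-resolves the manifest and finds every blob already present. A sketch of observing that with the containerd Go client — the socket path and namespace are common defaults, and the exact timings will of course vary:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.1"
	for i := 0; i < 2; i++ {
		start := time.Now()
		if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
			log.Fatal(err)
		}
		// The first iteration downloads and unpacks the layers; the second
		// only re-fetches the manifest, so it returns in milliseconds.
		fmt.Printf("pull %d took %s\n", i+1, time.Since(start))
	}
}
```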
Jan 13 21:24:04.121524 containerd[1476]: time="2025-01-13T21:24:04.120741586Z" level=info msg="StartContainer for \"828e6321d61929f75b8eebbe703070b97cd94ff7babc198be3c2a2c1c3974f1e\" returns successfully" Jan 13 21:24:04.679551 kubelet[2638]: I0113 21:24:04.679506 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b8b66ff96-zrfxq" podStartSLOduration=26.794978206 podStartE2EDuration="30.679451372s" podCreationTimestamp="2025-01-13 21:23:34 +0000 UTC" firstStartedPulling="2025-01-13 21:24:00.070149915 +0000 UTC m=+45.067368848" lastFinishedPulling="2025-01-13 21:24:03.954623069 +0000 UTC m=+48.951842014" observedRunningTime="2025-01-13 21:24:04.648111846 +0000 UTC m=+49.645330803" watchObservedRunningTime="2025-01-13 21:24:04.679451372 +0000 UTC m=+49.676670330" Jan 13 21:24:05.124445 containerd[1476]: time="2025-01-13T21:24:05.124387602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:05.127065 containerd[1476]: time="2025-01-13T21:24:05.126992508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:24:05.128990 containerd[1476]: time="2025-01-13T21:24:05.128673000Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:05.133574 containerd[1476]: time="2025-01-13T21:24:05.133534779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:05.139839 containerd[1476]: time="2025-01-13T21:24:05.139342920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.1808114s" Jan 13 21:24:05.139839 containerd[1476]: time="2025-01-13T21:24:05.139391729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:24:05.145442 containerd[1476]: time="2025-01-13T21:24:05.143726850Z" level=info msg="CreateContainer within sandbox \"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:24:05.181237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668078956.mount: Deactivated successfully. 
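The pod_startup_latency_tracker line above (calico-apiserver-b8b66ff96-zrfxq) encodes a subtraction: podStartSLOduration is podStartE2EDuration minus the image-pull window, i.e. lastFinishedPulling − firstStartedPulling taken on the monotonic clock (the m=+ values). Checking the arithmetic against the logged figures:

```go
package main

import "fmt"

func main() {
	const (
		e2e           = 30.679451372 // watchObservedRunningTime - podCreationTimestamp, seconds
		firstPullMono = 45.067368848 // firstStartedPulling, m=+ seconds
		lastPullMono  = 48.951842014 // lastFinishedPulling, m=+ seconds
	)
	slo := e2e - (lastPullMono - firstPullMono)
	fmt.Printf("podStartSLOduration ≈ %.9f\n", slo) // ≈ 26.794978206, as logged
}
```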
Jan 13 21:24:05.186524 containerd[1476]: time="2025-01-13T21:24:05.186231252Z" level=info msg="CreateContainer within sandbox \"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"855cf1b5273ee381fb541201a8467454dabac078526331dd834f4f074f7965c5\"" Jan 13 21:24:05.188381 containerd[1476]: time="2025-01-13T21:24:05.187390484Z" level=info msg="StartContainer for \"855cf1b5273ee381fb541201a8467454dabac078526331dd834f4f074f7965c5\"" Jan 13 21:24:05.248515 systemd[1]: Started cri-containerd-855cf1b5273ee381fb541201a8467454dabac078526331dd834f4f074f7965c5.scope - libcontainer container 855cf1b5273ee381fb541201a8467454dabac078526331dd834f4f074f7965c5. Jan 13 21:24:05.341887 containerd[1476]: time="2025-01-13T21:24:05.341833168Z" level=info msg="StartContainer for \"855cf1b5273ee381fb541201a8467454dabac078526331dd834f4f074f7965c5\" returns successfully" Jan 13 21:24:05.344571 containerd[1476]: time="2025-01-13T21:24:05.344507256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:24:05.634059 kubelet[2638]: I0113 21:24:05.633940 2638 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:24:05.885726 systemd[1]: Started sshd@9-10.128.0.49:22-147.75.109.163:44218.service - OpenSSH per-connection server daemon (147.75.109.163:44218). Jan 13 21:24:06.072802 ntpd[1437]: Listen normally on 7 vxlan.calico 192.168.82.64:123 Jan 13 21:24:06.072938 ntpd[1437]: Listen normally on 8 vxlan.calico [fe80::64a2:72ff:fef7:687a%4]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 7 vxlan.calico 192.168.82.64:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 8 vxlan.calico [fe80::64a2:72ff:fef7:687a%4]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 9 calie42bdd2b536 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 10 calief0d75657fe [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 11 calia6ea88aa65f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 12 cali63b99b6948b [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 13 cali7f9a9fe6b46 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:24:06.073400 ntpd[1437]: 13 Jan 21:24:06 ntpd[1437]: Listen normally on 14 cali014caca9130 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:24:06.073019 ntpd[1437]: Listen normally on 9 calie42bdd2b536 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:24:06.073080 ntpd[1437]: Listen normally on 10 calief0d75657fe [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:24:06.073144 ntpd[1437]: Listen normally on 11 calia6ea88aa65f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:24:06.073198 ntpd[1437]: Listen normally on 12 cali63b99b6948b [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 21:24:06.073252 ntpd[1437]: Listen normally on 13 cali7f9a9fe6b46 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:24:06.073366 ntpd[1437]: Listen normally on 14 cali014caca9130 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:24:06.208996 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 44218 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:06.217660 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 13 21:24:06.236673 systemd-logind[1457]: New session 10 of user core. Jan 13 21:24:06.241605 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:24:06.641321 kubelet[2638]: I0113 21:24:06.637255 2638 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:24:06.675639 sshd[4951]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:06.686513 systemd[1]: sshd@9-10.128.0.49:22-147.75.109.163:44218.service: Deactivated successfully. Jan 13 21:24:06.693593 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:24:06.697768 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:24:06.701221 systemd-logind[1457]: Removed session 10. Jan 13 21:24:06.880010 containerd[1476]: time="2025-01-13T21:24:06.879943980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:06.882609 containerd[1476]: time="2025-01-13T21:24:06.882201424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:24:06.883929 containerd[1476]: time="2025-01-13T21:24:06.883628478Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:06.888540 containerd[1476]: time="2025-01-13T21:24:06.888496143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:06.889841 containerd[1476]: time="2025-01-13T21:24:06.889797848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.545215351s" Jan 13 21:24:06.889927 containerd[1476]: time="2025-01-13T21:24:06.889848007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:24:06.894669 containerd[1476]: time="2025-01-13T21:24:06.894188643Z" level=info msg="CreateContainer within sandbox \"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:24:06.921496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013149778.mount: Deactivated successfully. 
Jan 13 21:24:06.925323 containerd[1476]: time="2025-01-13T21:24:06.922600435Z" level=info msg="CreateContainer within sandbox \"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c1561f38732680cb9115d16c1dec74dc75029665fa303d81c3c2e953159de8d4\"" Jan 13 21:24:06.925758 containerd[1476]: time="2025-01-13T21:24:06.925714577Z" level=info msg="StartContainer for \"c1561f38732680cb9115d16c1dec74dc75029665fa303d81c3c2e953159de8d4\"" Jan 13 21:24:06.984801 systemd[1]: Started cri-containerd-c1561f38732680cb9115d16c1dec74dc75029665fa303d81c3c2e953159de8d4.scope - libcontainer container c1561f38732680cb9115d16c1dec74dc75029665fa303d81c3c2e953159de8d4. Jan 13 21:24:07.039209 containerd[1476]: time="2025-01-13T21:24:07.039145098Z" level=info msg="StartContainer for \"c1561f38732680cb9115d16c1dec74dc75029665fa303d81c3c2e953159de8d4\" returns successfully" Jan 13 21:24:07.408419 kubelet[2638]: I0113 21:24:07.408372 2638 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:24:07.408419 kubelet[2638]: I0113 21:24:07.408436 2638 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:24:07.625150 kubelet[2638]: I0113 21:24:07.625067 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b8b66ff96-wwr94" podStartSLOduration=29.898771282 podStartE2EDuration="33.624885901s" podCreationTimestamp="2025-01-13 21:23:34 +0000 UTC" firstStartedPulling="2025-01-13 21:23:59.982781093 +0000 UTC m=+44.980000043" lastFinishedPulling="2025-01-13 21:24:03.70889572 +0000 UTC m=+48.706114662" observedRunningTime="2025-01-13 21:24:04.679849829 +0000 UTC m=+49.677068784" watchObservedRunningTime="2025-01-13 21:24:07.624885901 +0000 UTC m=+52.622104860" Jan 13 21:24:07.678106 kubelet[2638]: I0113 21:24:07.677962 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dnbrj" podStartSLOduration=27.688243474 podStartE2EDuration="33.677908804s" podCreationTimestamp="2025-01-13 21:23:34 +0000 UTC" firstStartedPulling="2025-01-13 21:24:00.900875078 +0000 UTC m=+45.898094021" lastFinishedPulling="2025-01-13 21:24:06.890540405 +0000 UTC m=+51.887759351" observedRunningTime="2025-01-13 21:24:07.676966154 +0000 UTC m=+52.674185112" watchObservedRunningTime="2025-01-13 21:24:07.677908804 +0000 UTC m=+52.675127766" Jan 13 21:24:11.730721 systemd[1]: Started sshd@10-10.128.0.49:22-147.75.109.163:42288.service - OpenSSH per-connection server daemon (147.75.109.163:42288). Jan 13 21:24:12.022845 sshd[5021]: Accepted publickey for core from 147.75.109.163 port 42288 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:12.024773 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:12.031494 systemd-logind[1457]: New session 11 of user core. Jan 13 21:24:12.035522 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:24:12.314646 sshd[5021]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:12.319755 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:24:12.320092 systemd[1]: sshd@10-10.128.0.49:22-147.75.109.163:42288.service: Deactivated successfully. 
Jan 13 21:24:12.323213 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:24:12.325797 systemd-logind[1457]: Removed session 11. Jan 13 21:24:15.176765 containerd[1476]: time="2025-01-13T21:24:15.176583296Z" level=info msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.223 [WARNING][5048] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13b69416-81ae-4717-b37c-e84f2bf2d81a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795", Pod:"csi-node-driver-dnbrj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b99b6948b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.224 [INFO][5048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.224 [INFO][5048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" iface="eth0" netns="" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.224 [INFO][5048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.224 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.252 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.252 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.252 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.259 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.259 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.261 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.264211 containerd[1476]: 2025-01-13 21:24:15.262 [INFO][5048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.265545 containerd[1476]: time="2025-01-13T21:24:15.264261394Z" level=info msg="TearDown network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" successfully" Jan 13 21:24:15.265545 containerd[1476]: time="2025-01-13T21:24:15.264319237Z" level=info msg="StopPodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" returns successfully" Jan 13 21:24:15.265545 containerd[1476]: time="2025-01-13T21:24:15.265502991Z" level=info msg="RemovePodSandbox for \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" Jan 13 21:24:15.265701 containerd[1476]: time="2025-01-13T21:24:15.265544794Z" level=info msg="Forcibly stopping sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\"" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.311 [WARNING][5074] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13b69416-81ae-4717-b37c-e84f2bf2d81a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"9c0bcbf0a5670528fc6b85127eeac514856ed29373781f8dce24ba1121096795", Pod:"csi-node-driver-dnbrj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b99b6948b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.311 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.311 [INFO][5074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" iface="eth0" netns="" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.311 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.311 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.340 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.340 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.340 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.347 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.347 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" HandleID="k8s-pod-network.9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-csi--node--driver--dnbrj-eth0" Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.349 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.351965 containerd[1476]: 2025-01-13 21:24:15.350 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e" Jan 13 21:24:15.352875 containerd[1476]: time="2025-01-13T21:24:15.352012899Z" level=info msg="TearDown network for sandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" successfully" Jan 13 21:24:15.356964 containerd[1476]: time="2025-01-13T21:24:15.356907627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:24:15.356964 containerd[1476]: time="2025-01-13T21:24:15.357006999Z" level=info msg="RemovePodSandbox \"9093af30e2d9de3f46fe645cd5897a1b6b1e299b0392313e2b3949426b1b095e\" returns successfully" Jan 13 21:24:15.358450 containerd[1476]: time="2025-01-13T21:24:15.358033977Z" level=info msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.416 [WARNING][5098] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4e98d21-89de-4cfb-bd25-889f4d6587ef", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920", Pod:"calico-apiserver-b8b66ff96-wwr94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief0d75657fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.416 [INFO][5098] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.416 [INFO][5098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" iface="eth0" netns="" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.416 [INFO][5098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.416 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.450 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.451 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.451 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.458 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.458 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.459 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.462148 containerd[1476]: 2025-01-13 21:24:15.460 [INFO][5098] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.463992 containerd[1476]: time="2025-01-13T21:24:15.462171709Z" level=info msg="TearDown network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" successfully" Jan 13 21:24:15.463992 containerd[1476]: time="2025-01-13T21:24:15.462203378Z" level=info msg="StopPodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" returns successfully" Jan 13 21:24:15.464812 containerd[1476]: time="2025-01-13T21:24:15.464374390Z" level=info msg="RemovePodSandbox for \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" Jan 13 21:24:15.464812 containerd[1476]: time="2025-01-13T21:24:15.464418544Z" level=info msg="Forcibly stopping sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\"" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.508 [WARNING][5123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4e98d21-89de-4cfb-bd25-889f4d6587ef", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"08506534ecdcb1197e1372820502eec6f8d9be56738c56b811df311e0192b920", Pod:"calico-apiserver-b8b66ff96-wwr94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief0d75657fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.508 [INFO][5123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.508 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" iface="eth0" netns="" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.508 [INFO][5123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.508 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.534 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.534 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.535 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.541 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.541 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" HandleID="k8s-pod-network.a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--wwr94-eth0" Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.543 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.545647 containerd[1476]: 2025-01-13 21:24:15.544 [INFO][5123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64" Jan 13 21:24:15.546552 containerd[1476]: time="2025-01-13T21:24:15.545653391Z" level=info msg="TearDown network for sandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" successfully" Jan 13 21:24:15.550675 containerd[1476]: time="2025-01-13T21:24:15.550575935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:24:15.550675 containerd[1476]: time="2025-01-13T21:24:15.550665914Z" level=info msg="RemovePodSandbox \"a6b2a0352e390c1ef8394f01e662750f636fa900999f6d9848a8e12f7d24db64\" returns successfully" Jan 13 21:24:15.551411 containerd[1476]: time="2025-01-13T21:24:15.551349269Z" level=info msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.594 [WARNING][5148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629", Pod:"coredns-76f75df574-jgmn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9a9fe6b46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.595 [INFO][5148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.595 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" iface="eth0" netns="" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.595 [INFO][5148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.595 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.634 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.634 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.634 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.646 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.646 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.648 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.650941 containerd[1476]: 2025-01-13 21:24:15.649 [INFO][5148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.652319 containerd[1476]: time="2025-01-13T21:24:15.651050203Z" level=info msg="TearDown network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" successfully" Jan 13 21:24:15.652319 containerd[1476]: time="2025-01-13T21:24:15.651086107Z" level=info msg="StopPodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" returns successfully" Jan 13 21:24:15.652319 containerd[1476]: time="2025-01-13T21:24:15.651592020Z" level=info msg="RemovePodSandbox for \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" Jan 13 21:24:15.652319 containerd[1476]: time="2025-01-13T21:24:15.651629770Z" level=info msg="Forcibly stopping sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\"" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.699 [WARNING][5173] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"47c77e8b-ecd7-41c6-9767-d6b9be1c4f20", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"fb1a37f8ba0f9cc6e08cf44daf0052fa4a26c0fc392a12581d752d540ee93629", Pod:"coredns-76f75df574-jgmn7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9a9fe6b46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.700 [INFO][5173] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.700 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" iface="eth0" netns="" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.700 [INFO][5173] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.700 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.726 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.727 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.727 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.739 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.739 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" HandleID="k8s-pod-network.dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--jgmn7-eth0" Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.742 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.746539 containerd[1476]: 2025-01-13 21:24:15.745 [INFO][5173] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c" Jan 13 21:24:15.747541 containerd[1476]: time="2025-01-13T21:24:15.746571298Z" level=info msg="TearDown network for sandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" successfully" Jan 13 21:24:15.750600 containerd[1476]: time="2025-01-13T21:24:15.750551307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:24:15.750888 containerd[1476]: time="2025-01-13T21:24:15.750629598Z" level=info msg="RemovePodSandbox \"dae858e32f0c36c6488763e233ea6ae4dbff540d7ec9f3acc2d5a54d32303e5c\" returns successfully" Jan 13 21:24:15.751375 containerd[1476]: time="2025-01-13T21:24:15.751321907Z" level=info msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.794 [WARNING][5197] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0", GenerateName:"calico-kube-controllers-6f864b4c96-", Namespace:"calico-system", SelfLink:"", UID:"ea8ed1ad-5383-412b-b6fb-8d38f75a663b", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f864b4c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306", Pod:"calico-kube-controllers-6f864b4c96-r64tw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42bdd2b536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.795 [INFO][5197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.795 [INFO][5197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" iface="eth0" netns="" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.795 [INFO][5197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.795 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.823 [INFO][5203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.823 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.823 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.830 [WARNING][5203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.830 [INFO][5203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.831 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.833916 containerd[1476]: 2025-01-13 21:24:15.832 [INFO][5197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.833916 containerd[1476]: time="2025-01-13T21:24:15.833885274Z" level=info msg="TearDown network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" successfully" Jan 13 21:24:15.833916 containerd[1476]: time="2025-01-13T21:24:15.833921636Z" level=info msg="StopPodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" returns successfully" Jan 13 21:24:15.835351 containerd[1476]: time="2025-01-13T21:24:15.834486197Z" level=info msg="RemovePodSandbox for \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" Jan 13 21:24:15.835351 containerd[1476]: time="2025-01-13T21:24:15.834521820Z" level=info msg="Forcibly stopping sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\"" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.878 [WARNING][5221] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0", GenerateName:"calico-kube-controllers-6f864b4c96-", Namespace:"calico-system", SelfLink:"", UID:"ea8ed1ad-5383-412b-b6fb-8d38f75a663b", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f864b4c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"8024dd39911442fdb8cd5bbf22e36a50b96eb4546095a60181c5d46a67311306", Pod:"calico-kube-controllers-6f864b4c96-r64tw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie42bdd2b536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.878 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.878 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" iface="eth0" netns="" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.878 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.878 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.903 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.903 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.903 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.912 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.913 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" HandleID="k8s-pod-network.fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--kube--controllers--6f864b4c96--r64tw-eth0" Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.914 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:24:15.917778 containerd[1476]: 2025-01-13 21:24:15.916 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b" Jan 13 21:24:15.920640 containerd[1476]: time="2025-01-13T21:24:15.917821615Z" level=info msg="TearDown network for sandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" successfully" Jan 13 21:24:15.922943 containerd[1476]: time="2025-01-13T21:24:15.922864628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:24:15.923056 containerd[1476]: time="2025-01-13T21:24:15.922950319Z" level=info msg="RemovePodSandbox \"fd046ca68206396e020b00bc22b31a6b0e6d0e3e844d3e6cd10631fadd059d2b\" returns successfully" Jan 13 21:24:15.924327 containerd[1476]: time="2025-01-13T21:24:15.924049233Z" level=info msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\"" Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:15.971 [WARNING][5246] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149ffd71-7f07-4581-ab1a-ae1e649167c5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f", Pod:"coredns-76f75df574-xmx4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014caca9130", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:15.972 [INFO][5246] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:15.972 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" iface="eth0" netns=""
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:15.972 [INFO][5246] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:15.972 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.007 [INFO][5252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.008 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.008 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.014 [WARNING][5252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.014 [INFO][5252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.016 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:24:16.019177 containerd[1476]: 2025-01-13 21:24:16.017 [INFO][5246] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.019177 containerd[1476]: time="2025-01-13T21:24:16.019071304Z" level=info msg="TearDown network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" successfully"
Jan 13 21:24:16.019177 containerd[1476]: time="2025-01-13T21:24:16.019107028Z" level=info msg="StopPodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" returns successfully"
Jan 13 21:24:16.020710 containerd[1476]: time="2025-01-13T21:24:16.019698033Z" level=info msg="RemovePodSandbox for \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\""
Jan 13 21:24:16.020710 containerd[1476]: time="2025-01-13T21:24:16.019739885Z" level=info msg="Forcibly stopping sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\""
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.075 [WARNING][5271] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149ffd71-7f07-4581-ab1a-ae1e649167c5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"91b38218d9ed94b97aee1c7a4d9333432a25e71fa13aa1394c61483c22984f3f", Pod:"coredns-76f75df574-xmx4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014caca9130", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.075 [INFO][5271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.076 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" iface="eth0" netns=""
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.076 [INFO][5271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.076 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.107 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.107 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.107 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.114 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.114 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" HandleID="k8s-pod-network.80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-coredns--76f75df574--xmx4z-eth0"
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.116 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:24:16.120558 containerd[1476]: 2025-01-13 21:24:16.118 [INFO][5271] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964"
Jan 13 21:24:16.120558 containerd[1476]: time="2025-01-13T21:24:16.120557152Z" level=info msg="TearDown network for sandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" successfully"
Jan 13 21:24:16.415229 containerd[1476]: time="2025-01-13T21:24:16.412630423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:24:16.415229 containerd[1476]: time="2025-01-13T21:24:16.412855106Z" level=info msg="RemovePodSandbox \"80bbc8ac72eed508cddcb32560d9db3214b6a6842400d56f7939d41f69ac2964\" returns successfully"
Jan 13 21:24:16.418181 containerd[1476]: time="2025-01-13T21:24:16.417158432Z" level=info msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\""
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.461 [WARNING][5295] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"8743f2ec-4b00-4436-8e8d-31d6c433e17f", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9", Pod:"calico-apiserver-b8b66ff96-zrfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6ea88aa65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.461 [INFO][5295] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.461 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" iface="eth0" netns=""
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.461 [INFO][5295] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.461 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.487 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.487 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.487 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.494 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.494 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.498 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:24:16.500952 containerd[1476]: 2025-01-13 21:24:16.499 [INFO][5295] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.502423 containerd[1476]: time="2025-01-13T21:24:16.500989834Z" level=info msg="TearDown network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" successfully"
Jan 13 21:24:16.502423 containerd[1476]: time="2025-01-13T21:24:16.501025503Z" level=info msg="StopPodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" returns successfully"
Jan 13 21:24:16.502423 containerd[1476]: time="2025-01-13T21:24:16.501773661Z" level=info msg="RemovePodSandbox for \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\""
Jan 13 21:24:16.502423 containerd[1476]: time="2025-01-13T21:24:16.501810591Z" level=info msg="Forcibly stopping sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\""
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.548 [WARNING][5319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0", GenerateName:"calico-apiserver-b8b66ff96-", Namespace:"calico-apiserver", SelfLink:"", UID:"8743f2ec-4b00-4436-8e8d-31d6c433e17f", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 23, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b8b66ff96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-092876cd8e111f5f553b.c.flatcar-212911.internal", ContainerID:"c9a41f767e1686818f602e80a5a0d34aef673efccd83a657d992d4295b3da9f9", Pod:"calico-apiserver-b8b66ff96-zrfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6ea88aa65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.549 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.549 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" iface="eth0" netns=""
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.549 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.549 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.574 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.574 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.574 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.581 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.581 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" HandleID="k8s-pod-network.e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8" Workload="ci--4081--3--0--092876cd8e111f5f553b.c.flatcar--212911.internal-k8s-calico--apiserver--b8b66ff96--zrfxq-eth0"
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.582 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:24:16.585563 containerd[1476]: 2025-01-13 21:24:16.584 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8"
Jan 13 21:24:16.586480 containerd[1476]: time="2025-01-13T21:24:16.585587597Z" level=info msg="TearDown network for sandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" successfully"
Jan 13 21:24:16.768279 containerd[1476]: time="2025-01-13T21:24:16.767964590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:24:16.768279 containerd[1476]: time="2025-01-13T21:24:16.768059905Z" level=info msg="RemovePodSandbox \"e73f6a3c6a1c7c428fbeca57f8a134e30186b06075c9a2a2827e3ace914b22d8\" returns successfully"
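In the WorkloadEndpoint dumps above, Calico prints the pod's port list with hex values: Port:0x35 is 53 (CoreDNS's DNS port, listed once for UDP and once for TCP) and Port:0x23c1 is 9153 (CoreDNS's Prometheus metrics port). A minimal, self-contained Go sketch that decodes those entries; the endpointPort type below is a stand-in for illustration, not the projectcalico.org/v3 API:

    package main

    import "fmt"

    // endpointPort mirrors just the fields of the v3.WorkloadEndpointPort
    // entries printed in the log; it is illustrative, not the Calico type.
    type endpointPort struct {
    	Name     string
    	Protocol string
    	Port     uint16
    }

    func main() {
    	// Values copied from the coredns WorkloadEndpoint dump above.
    	ports := []endpointPort{
    		{Name: "dns", Protocol: "UDP", Port: 0x35},
    		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},
    		{Name: "metrics", Protocol: "TCP", Port: 0x23c1},
    	}
    	for _, p := range ports {
    		// Prints: dns/UDP -> 53, dns-tcp/TCP -> 53, metrics/TCP -> 9153.
    		fmt.Printf("%s/%s -> %d\n", p.Name, p.Protocol, p.Port)
    	}
    }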
Jan 13 21:24:17.371559 systemd[1]: Started sshd@11-10.128.0.49:22-147.75.109.163:47504.service - OpenSSH per-connection server daemon (147.75.109.163:47504).
Jan 13 21:24:17.658723 sshd[5333]: Accepted publickey for core from 147.75.109.163 port 47504 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:17.660921 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:17.666934 systemd-logind[1457]: New session 12 of user core.
Jan 13 21:24:17.674539 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:24:17.948226 sshd[5333]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:17.953628 systemd[1]: sshd@11-10.128.0.49:22-147.75.109.163:47504.service: Deactivated successfully.
Jan 13 21:24:17.956521 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:24:17.958763 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:24:17.960793 systemd-logind[1457]: Removed session 12.
Jan 13 21:24:18.006716 systemd[1]: Started sshd@12-10.128.0.49:22-147.75.109.163:47516.service - OpenSSH per-connection server daemon (147.75.109.163:47516).
Jan 13 21:24:18.299500 sshd[5353]: Accepted publickey for core from 147.75.109.163 port 47516 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:18.301463 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:18.308026 systemd-logind[1457]: New session 13 of user core.
Jan 13 21:24:18.314511 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:24:18.622598 sshd[5353]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:18.631091 systemd[1]: sshd@12-10.128.0.49:22-147.75.109.163:47516.service: Deactivated successfully.
Jan 13 21:24:18.631147 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:24:18.637400 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:24:18.644552 systemd-logind[1457]: Removed session 13.
Jan 13 21:24:18.682644 systemd[1]: Started sshd@13-10.128.0.49:22-147.75.109.163:47518.service - OpenSSH per-connection server daemon (147.75.109.163:47518).
Jan 13 21:24:18.989690 sshd[5364]: Accepted publickey for core from 147.75.109.163 port 47518 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:18.991612 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:18.997386 systemd-logind[1457]: New session 14 of user core.
Jan 13 21:24:19.002485 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:24:19.277976 sshd[5364]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:19.283584 systemd[1]: sshd@13-10.128.0.49:22-147.75.109.163:47518.service: Deactivated successfully.
Jan 13 21:24:19.286159 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:24:19.287416 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:24:19.288989 systemd-logind[1457]: Removed session 14.
Jan 13 21:24:24.331721 systemd[1]: Started sshd@14-10.128.0.49:22-147.75.109.163:47530.service - OpenSSH per-connection server daemon (147.75.109.163:47530).
Jan 13 21:24:24.623764 sshd[5400]: Accepted publickey for core from 147.75.109.163 port 47530 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:24.625821 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:24.632135 systemd-logind[1457]: New session 15 of user core.
Jan 13 21:24:24.640542 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:24:24.911423 sshd[5400]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:24.916550 systemd[1]: sshd@14-10.128.0.49:22-147.75.109.163:47530.service: Deactivated successfully.
Jan 13 21:24:24.919279 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:24:24.921559 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:24:24.923483 systemd-logind[1457]: Removed session 15.
Jan 13 21:24:29.966017 systemd[1]: Started sshd@15-10.128.0.49:22-147.75.109.163:59540.service - OpenSSH per-connection server daemon (147.75.109.163:59540).
Jan 13 21:24:30.257482 sshd[5438]: Accepted publickey for core from 147.75.109.163 port 59540 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:30.259394 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:30.265879 systemd-logind[1457]: New session 16 of user core.
Jan 13 21:24:30.270660 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:24:30.541680 sshd[5438]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:30.547585 systemd[1]: sshd@15-10.128.0.49:22-147.75.109.163:59540.service: Deactivated successfully.
Jan 13 21:24:30.550431 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:24:30.551791 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:24:30.553323 systemd-logind[1457]: Removed session 16.
Jan 13 21:24:31.845269 kubelet[2638]: I0113 21:24:31.844812 2638 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:24:35.597140 systemd[1]: Started sshd@16-10.128.0.49:22-147.75.109.163:59544.service - OpenSSH per-connection server daemon (147.75.109.163:59544).
Jan 13 21:24:35.898761 sshd[5453]: Accepted publickey for core from 147.75.109.163 port 59544 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:35.900529 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:35.907146 systemd-logind[1457]: New session 17 of user core.
Jan 13 21:24:35.913541 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:24:36.220786 sshd[5453]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:36.227477 systemd[1]: sshd@16-10.128.0.49:22-147.75.109.163:59544.service: Deactivated successfully.
Jan 13 21:24:36.230159 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:24:36.232517 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:24:36.234015 systemd-logind[1457]: Removed session 17.
Jan 13 21:24:41.276742 systemd[1]: Started sshd@17-10.128.0.49:22-147.75.109.163:52706.service - OpenSSH per-connection server daemon (147.75.109.163:52706).
Jan 13 21:24:41.563930 sshd[5473]: Accepted publickey for core from 147.75.109.163 port 52706 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:41.565941 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:41.572441 systemd-logind[1457]: New session 18 of user core.
Jan 13 21:24:41.577494 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:24:41.854674 sshd[5473]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:41.860712 systemd[1]: sshd@17-10.128.0.49:22-147.75.109.163:52706.service: Deactivated successfully.
Jan 13 21:24:41.863062 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:24:41.864139 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:24:41.865689 systemd-logind[1457]: Removed session 18.
Jan 13 21:24:41.911655 systemd[1]: Started sshd@18-10.128.0.49:22-147.75.109.163:52708.service - OpenSSH per-connection server daemon (147.75.109.163:52708).
Jan 13 21:24:42.205971 sshd[5485]: Accepted publickey for core from 147.75.109.163 port 52708 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:42.207896 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:42.214777 systemd-logind[1457]: New session 19 of user core.
Jan 13 21:24:42.220497 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:24:42.553584 sshd[5485]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:42.558439 systemd[1]: sshd@18-10.128.0.49:22-147.75.109.163:52708.service: Deactivated successfully.
Jan 13 21:24:42.561215 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:24:42.563454 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:24:42.564969 systemd-logind[1457]: Removed session 19.
Jan 13 21:24:42.609626 systemd[1]: Started sshd@19-10.128.0.49:22-147.75.109.163:52724.service - OpenSSH per-connection server daemon (147.75.109.163:52724).
Jan 13 21:24:42.902368 sshd[5497]: Accepted publickey for core from 147.75.109.163 port 52724 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:42.904522 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:42.910356 systemd-logind[1457]: New session 20 of user core.
Jan 13 21:24:42.916516 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:24:44.932023 sshd[5497]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:44.942053 systemd[1]: sshd@19-10.128.0.49:22-147.75.109.163:52724.service: Deactivated successfully.
Jan 13 21:24:44.946745 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:24:44.950544 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:24:44.952997 systemd-logind[1457]: Removed session 20.
Jan 13 21:24:44.988662 systemd[1]: Started sshd@20-10.128.0.49:22-147.75.109.163:52728.service - OpenSSH per-connection server daemon (147.75.109.163:52728).
Jan 13 21:24:45.274335 sshd[5515]: Accepted publickey for core from 147.75.109.163 port 52728 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:45.275617 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:45.282261 systemd-logind[1457]: New session 21 of user core.
Jan 13 21:24:45.288521 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:24:45.702161 sshd[5515]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:45.707071 systemd[1]: sshd@20-10.128.0.49:22-147.75.109.163:52728.service: Deactivated successfully.
Jan 13 21:24:45.710018 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:24:45.712642 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:24:45.714934 systemd-logind[1457]: Removed session 21.
Jan 13 21:24:45.762679 systemd[1]: Started sshd@21-10.128.0.49:22-147.75.109.163:52738.service - OpenSSH per-connection server daemon (147.75.109.163:52738).
Jan 13 21:24:46.051726 sshd[5526]: Accepted publickey for core from 147.75.109.163 port 52738 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:46.053566 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:46.061763 systemd-logind[1457]: New session 22 of user core.
Jan 13 21:24:46.067492 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:24:46.338908 sshd[5526]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:46.344113 systemd[1]: sshd@21-10.128.0.49:22-147.75.109.163:52738.service: Deactivated successfully.
Jan 13 21:24:46.347172 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:24:46.349900 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:24:46.351802 systemd-logind[1457]: Removed session 22.
Jan 13 21:24:51.394657 systemd[1]: Started sshd@22-10.128.0.49:22-147.75.109.163:57984.service - OpenSSH per-connection server daemon (147.75.109.163:57984).
Jan 13 21:24:51.440139 systemd[1]: run-containerd-runc-k8s.io-18f6c430d36240205a2af7843ef2242da11468f0047dbc6a855f8822a92c2e53-runc.ru0keJ.mount: Deactivated successfully.
Jan 13 21:24:51.692598 sshd[5538]: Accepted publickey for core from 147.75.109.163 port 57984 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:51.694280 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:51.700885 systemd-logind[1457]: New session 23 of user core.
Jan 13 21:24:51.705508 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:24:51.977383 sshd[5538]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:51.983485 systemd[1]: sshd@22-10.128.0.49:22-147.75.109.163:57984.service: Deactivated successfully.
Jan 13 21:24:51.986116 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:24:51.987461 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:24:51.989188 systemd-logind[1457]: Removed session 23.
Jan 13 21:24:52.900343 systemd[1]: run-containerd-runc-k8s.io-b17384fabb343f84ca39f37bf71b93c2131926d5fcd2f401c29cc2254cd8a8da-runc.tF2gv0.mount: Deactivated successfully.
Jan 13 21:24:57.040119 systemd[1]: Started sshd@23-10.128.0.49:22-147.75.109.163:57990.service - OpenSSH per-connection server daemon (147.75.109.163:57990).
Jan 13 21:24:57.334834 sshd[5593]: Accepted publickey for core from 147.75.109.163 port 57990 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:24:57.334617 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:57.347405 systemd-logind[1457]: New session 24 of user core.
Jan 13 21:24:57.351507 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:24:57.619586 sshd[5593]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:57.625499 systemd[1]: sshd@23-10.128.0.49:22-147.75.109.163:57990.service: Deactivated successfully.
Jan 13 21:24:57.630112 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:24:57.633239 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:24:57.638017 systemd-logind[1457]: Removed session 24.
Jan 13 21:25:02.672705 systemd[1]: Started sshd@24-10.128.0.49:22-147.75.109.163:38964.service - OpenSSH per-connection server daemon (147.75.109.163:38964).
Jan 13 21:25:02.965947 sshd[5627]: Accepted publickey for core from 147.75.109.163 port 38964 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:25:02.968015 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:02.975080 systemd-logind[1457]: New session 25 of user core.
Jan 13 21:25:02.980532 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:25:03.282559 sshd[5627]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:03.289507 systemd[1]: sshd@24-10.128.0.49:22-147.75.109.163:38964.service: Deactivated successfully.
Jan 13 21:25:03.293964 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:25:03.296547 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:25:03.299165 systemd-logind[1457]: Removed session 25.
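The sshd, pam_unix, and systemd-logind records above repeat one pattern per connection: socket-activated service start, publickey accept, PAM session open, logind session registration, session scope start, then the same steps torn down in reverse. A minimal Go sketch that pairs each "session opened"/"session closed" record by sshd PID to report session durations; it assumes only the exact line shapes visible in this journal, not any general journald format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // Matches lines such as:
    //   Jan 13 21:24:17.660921 sshd[5333]: pam_unix(sshd:session): session opened ...
    var line = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

    func main() {
    	opened := map[string]time.Time{} // sshd PID -> time the session opened
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20) // some records (the WorkloadEndpoint dumps) are long
    	for sc.Scan() {
    		m := line.FindStringSubmatch(sc.Text())
    		if m == nil {
    			continue
    		}
    		// The journal lines carry no year; year 0 is fine for computing durations.
    		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
    		if err != nil {
    			continue
    		}
    		pid, event := m[2], m[3]
    		if event == "opened" {
    			opened[pid] = ts
    		} else if t0, ok := opened[pid]; ok {
    			fmt.Printf("sshd[%s]: session lasted %s\n", pid, ts.Sub(t0))
    			delete(opened, pid)
    		}
    	}
    }

Fed this journal on stdin, it would report, for example, sshd[5333]: session lasted 287.305ms (opened 21:24:17.660921, closed 21:24:17.948226).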