Jan 17 12:21:05.090039 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:21:05.090090 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:21:05.090108 kernel: BIOS-provided physical RAM map:
Jan 17 12:21:05.090122 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 17 12:21:05.090136 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 17 12:21:05.090149 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 17 12:21:05.090167 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 17 12:21:05.090186 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 17 12:21:05.090200 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 17 12:21:05.090214 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 17 12:21:05.090229 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 17 12:21:05.090244 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 17 12:21:05.090258 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 17 12:21:05.090274 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 17 12:21:05.090297 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 17 12:21:05.090314 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 17 12:21:05.090331 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 17 12:21:05.090347 kernel: NX (Execute Disable) protection: active
Jan 17 12:21:05.090363 kernel: APIC: Static calls initialized
Jan 17 12:21:05.090379 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:21:05.090395 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 17 12:21:05.090411 kernel: SMBIOS 2.4 present.
Jan 17 12:21:05.090427 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 17 12:21:05.090442 kernel: Hypervisor detected: KVM
Jan 17 12:21:05.090463 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:21:05.090480 kernel: kvm-clock: using sched offset of 12322430719 cycles
Jan 17 12:21:05.090498 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:21:05.090515 kernel: tsc: Detected 2299.998 MHz processor
Jan 17 12:21:05.090532 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:21:05.090550 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:21:05.090567 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 17 12:21:05.090584 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 17 12:21:05.090601 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:21:05.090622 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 17 12:21:05.090639 kernel: Using GB pages for direct mapping
Jan 17 12:21:05.090655 kernel: Secure boot disabled
Jan 17 12:21:05.090672 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:21:05.090696 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 17 12:21:05.090713 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 17 12:21:05.090731 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 17 12:21:05.090755 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 17 12:21:05.090777 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 17 12:21:05.090795 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 17 12:21:05.090813 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 17 12:21:05.090831 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 17 12:21:05.090850 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 17 12:21:05.090868 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 17 12:21:05.090889 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 17 12:21:05.090907 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 17 12:21:05.090950 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 17 12:21:05.090968 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 17 12:21:05.090986 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 17 12:21:05.091004 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 17 12:21:05.091022 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 17 12:21:05.091040 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 17 12:21:05.091056 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 17 12:21:05.091077 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 17 12:21:05.091095 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:21:05.091113 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:21:05.091132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:21:05.091149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 17 12:21:05.091167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 17 12:21:05.091185 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 17 12:21:05.091202 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 17 12:21:05.091219 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 17 12:21:05.091241 kernel: Zone ranges:
Jan 17 12:21:05.091258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:21:05.091274 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:21:05.091292 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 12:21:05.091310 kernel: Movable zone start for each node
Jan 17 12:21:05.091328 kernel: Early memory node ranges
Jan 17 12:21:05.091346 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 17 12:21:05.091364 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 17 12:21:05.091382 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 17 12:21:05.091404 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 17 12:21:05.091422 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 12:21:05.091440 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 17 12:21:05.091458 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:21:05.091476 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 17 12:21:05.091495 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 17 12:21:05.091513 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 12:21:05.091531 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 17 12:21:05.091549 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 12:21:05.091571 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:21:05.091589 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:21:05.091606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:21:05.091624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:21:05.091643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:21:05.091661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:21:05.091679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:21:05.091704 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:21:05.091723 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 12:21:05.091745 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:21:05.091764 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:21:05.091782 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:21:05.091800 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:21:05.091817 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:21:05.091834 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:21:05.091852 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:21:05.091871 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:21:05.091891 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:21:05.091927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:21:05.091942 kernel: random: crng init done
Jan 17 12:21:05.091957 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 12:21:05.091971 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:21:05.091984 kernel: Fallback order for Node 0: 0
Jan 17 12:21:05.091998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 17 12:21:05.092014 kernel: Policy zone: Normal
Jan 17 12:21:05.092028 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:21:05.092048 kernel: software IO TLB: area num 2.
Jan 17 12:21:05.092063 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 346948K reserved, 0K cma-reserved)
Jan 17 12:21:05.092077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:21:05.092092 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:21:05.092106 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:21:05.092121 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:21:05.092135 kernel: Dynamic Preempt: voluntary
Jan 17 12:21:05.092151 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:21:05.092174 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:21:05.092209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:21:05.092226 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:21:05.092243 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:21:05.092264 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:21:05.092282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:21:05.092299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:21:05.092318 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:21:05.092335 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:21:05.092372 kernel: Console: colour dummy device 80x25
Jan 17 12:21:05.092395 kernel: printk: console [ttyS0] enabled
Jan 17 12:21:05.092414 kernel: ACPI: Core revision 20230628
Jan 17 12:21:05.092452 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:21:05.092469 kernel: x2apic enabled
Jan 17 12:21:05.092486 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:21:05.092503 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 17 12:21:05.092523 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 12:21:05.092542 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 17 12:21:05.092565 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 17 12:21:05.092584 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 17 12:21:05.092603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:21:05.092621 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 12:21:05.092640 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 12:21:05.092657 kernel: Spectre V2 : Mitigation: IBRS
Jan 17 12:21:05.092676 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:21:05.092703 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:21:05.092722 kernel: RETBleed: Mitigation: IBRS
Jan 17 12:21:05.092745 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:21:05.092764 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 17 12:21:05.092782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:21:05.092801 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:21:05.092819 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:21:05.092837 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:21:05.092856 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:21:05.092874 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:21:05.092893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:21:05.092932 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:21:05.092952 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:21:05.092970 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:21:05.092989 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:21:05.093007 kernel: landlock: Up and running.
Jan 17 12:21:05.093025 kernel: SELinux: Initializing.
Jan 17 12:21:05.093043 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:21:05.093062 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:21:05.093080 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 17 12:21:05.093104 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:21:05.093122 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:21:05.093141 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:21:05.093160 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 17 12:21:05.093179 kernel: signal: max sigframe size: 1776
Jan 17 12:21:05.093197 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:21:05.093216 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:21:05.093234 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:21:05.093253 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:21:05.093275 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:21:05.093294 kernel: .... node #0, CPUs: #1
Jan 17 12:21:05.093313 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 12:21:05.093333 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 12:21:05.093351 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:21:05.093370 kernel: smpboot: Max logical packages: 1
Jan 17 12:21:05.093388 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 17 12:21:05.093406 kernel: devtmpfs: initialized
Jan 17 12:21:05.093428 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:21:05.093447 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 17 12:21:05.093466 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:21:05.093484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:21:05.093503 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:21:05.093523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:21:05.093541 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:21:05.093560 kernel: audit: type=2000 audit(1737116464.137:1): state=initialized audit_enabled=0 res=1
Jan 17 12:21:05.093577 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:21:05.093599 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:21:05.093618 kernel: cpuidle: using governor menu
Jan 17 12:21:05.093636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:21:05.093655 kernel: dca service started, version 1.12.1
Jan 17 12:21:05.093673 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:21:05.093697 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:21:05.093715 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:21:05.093745 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:21:05.093765 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:21:05.093789 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:21:05.093806 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:21:05.093823 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:21:05.093844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:21:05.093863 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:21:05.093882 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 12:21:05.093902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:21:05.093954 kernel: ACPI: Interpreter enabled
Jan 17 12:21:05.093974 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:21:05.093997 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:21:05.094018 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:21:05.094038 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 12:21:05.094057 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 12:21:05.094076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:21:05.094360 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:21:05.094572 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:21:05.094772 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:21:05.094797 kernel: PCI host bridge to bus 0000:00
Jan 17 12:21:05.095024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:21:05.095206 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:21:05.095373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:21:05.095537 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 12:21:05.095707 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:21:05.095910 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:21:05.096143 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 12:21:05.096330 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:21:05.096508 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 12:21:05.096721 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 12:21:05.096904 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 12:21:05.097142 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 12:21:05.097333 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:21:05.097522 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 12:21:05.097730 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 12:21:05.097976 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:21:05.098171 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 12:21:05.098355 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 12:21:05.098387 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:21:05.098408 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:21:05.098429 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:21:05.098447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:21:05.098467 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:21:05.098487 kernel: iommu: Default domain type: Translated
Jan 17 12:21:05.098507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:21:05.098526 kernel: efivars: Registered efivars operations
Jan 17 12:21:05.098546 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:21:05.098570 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:21:05.098590 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 12:21:05.098610 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 12:21:05.098629 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 12:21:05.098649 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 12:21:05.098668 kernel: vgaarb: loaded
Jan 17 12:21:05.098695 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:21:05.098715 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:21:05.098736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:21:05.098760 kernel: pnp: PnP ACPI init
Jan 17 12:21:05.098780 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 12:21:05.098801 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:21:05.098821 kernel: NET: Registered PF_INET protocol family
Jan 17 12:21:05.098840 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:21:05.098861 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 12:21:05.098881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:21:05.098900 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:21:05.098945 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 12:21:05.098970 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 12:21:05.098990 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:21:05.099010 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:21:05.099030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:21:05.099050 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:21:05.099234 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:21:05.099400 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:21:05.099566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:21:05.099747 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 12:21:05.099969 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:21:05.099993 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:21:05.100009 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:21:05.100026 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 12:21:05.100042 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:21:05.100057 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 12:21:05.100074 kernel: clocksource: Switched to clocksource tsc
Jan 17 12:21:05.100099 kernel: Initialise system trusted keyrings
Jan 17 12:21:05.100117 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 12:21:05.100136 kernel: Key type asymmetric registered
Jan 17 12:21:05.100155 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:21:05.100174 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:21:05.100190 kernel: io scheduler mq-deadline registered
Jan 17 12:21:05.100208 kernel: io scheduler kyber registered
Jan 17 12:21:05.100226 kernel: io scheduler bfq registered
Jan 17 12:21:05.100243 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:21:05.100267 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:21:05.100467 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 12:21:05.100493 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 12:21:05.100675 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 12:21:05.100709 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:21:05.100891 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 12:21:05.100928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:21:05.100957 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:21:05.100977 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 12:21:05.101003 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 12:21:05.101022 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 12:21:05.101217 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 12:21:05.101245 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:21:05.101264 kernel: i8042: Warning: Keylock active
Jan 17 12:21:05.101283 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:21:05.101302 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:21:05.101489 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 12:21:05.101667 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 12:21:05.101847 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:21:04 UTC (1737116464)
Jan 17 12:21:05.102067 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 12:21:05.102092 kernel: intel_pstate: CPU model not supported
Jan 17 12:21:05.102110 kernel: pstore: Using crash dump compression: deflate
Jan 17 12:21:05.102129 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 12:21:05.102146 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:21:05.102164 kernel: Segment Routing with IPv6
Jan 17 12:21:05.102188 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:21:05.102206 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:21:05.102223 kernel: Key type dns_resolver registered
Jan 17 12:21:05.102240 kernel: IPI shorthand broadcast: enabled
Jan 17 12:21:05.102259 kernel: sched_clock: Marking stable (853035991, 135448446)->(1005414435, -16929998)
Jan 17 12:21:05.102277 kernel: registered taskstats version 1
Jan 17 12:21:05.102295 kernel: Loading compiled-in X.509 certificates
Jan 17 12:21:05.102312 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:21:05.102329 kernel: Key type .fscrypt registered
Jan 17 12:21:05.102352 kernel: Key type fscrypt-provisioning registered
Jan 17 12:21:05.102370 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:21:05.102388 kernel: ima: No architecture policies found
Jan 17 12:21:05.102405 kernel: clk: Disabling unused clocks
Jan 17 12:21:05.102423 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:21:05.102441 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:21:05.102459 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:21:05.102475 kernel: Run /init as init process
Jan 17 12:21:05.102497 kernel: with arguments:
Jan 17 12:21:05.102514 kernel: /init
Jan 17 12:21:05.102531 kernel: with environment:
Jan 17 12:21:05.102549 kernel: HOME=/
Jan 17 12:21:05.102566 kernel: TERM=linux
Jan 17 12:21:05.102585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:21:05.102603 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:21:05.102625 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:21:05.102650 systemd[1]: Detected virtualization google.
Jan 17 12:21:05.102669 systemd[1]: Detected architecture x86-64.
Jan 17 12:21:05.102698 systemd[1]: Running in initrd.
Jan 17 12:21:05.102717 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:21:05.102733 systemd[1]: Hostname set to .
Jan 17 12:21:05.102753 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:21:05.102774 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:21:05.102794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:21:05.102820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:21:05.102843 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:21:05.102863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:21:05.102883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:21:05.102903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:21:05.102952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:21:05.102974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:21:05.102995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:21:05.103014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:21:05.103053 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:21:05.103073 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:21:05.103099 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:21:05.103123 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:21:05.103154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:21:05.103176 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:21:05.103195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:21:05.103216 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:21:05.103237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:21:05.103257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:21:05.103278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:21:05.103299 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:21:05.103323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:21:05.103344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:21:05.103365 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:21:05.103386 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:21:05.103406 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:21:05.103427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:21:05.103446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:21:05.103467 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:21:05.103532 systemd-journald[183]: Collecting audit messages is disabled.
Jan 17 12:21:05.103581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:21:05.103603 systemd-journald[183]: Journal started
Jan 17 12:21:05.103647 systemd-journald[183]: Runtime Journal (/run/log/journal/f6559d4ee2d24ea8be446b300cb2ac5c) is 8.0M, max 148.7M, 140.7M free.
Jan 17 12:21:05.105982 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:21:05.111083 systemd-modules-load[184]: Inserted module 'overlay'
Jan 17 12:21:05.111410 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:21:05.121520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:21:05.140141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:21:05.153177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:21:05.160166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:21:05.163297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:21:05.173131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:21:05.173170 kernel: Bridge firewalling registered
Jan 17 12:21:05.173975 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 17 12:21:05.179069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:21:05.184625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:21:05.203166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:21:05.213166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:21:05.226807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:21:05.237226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:21:05.245421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:21:05.250557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:21:05.262188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:21:05.282954 dracut-cmdline[211]: dracut-dracut-053
Jan 17 12:21:05.288389 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:21:05.315749 systemd-resolved[216]: Positive Trust Anchors:
Jan 17 12:21:05.315778 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:21:05.315847 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:21:05.323089 systemd-resolved[216]: Defaulting to hostname 'linux'.
Jan 17 12:21:05.324807 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:21:05.345189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:21:05.393959 kernel: SCSI subsystem initialized
Jan 17 12:21:05.404942 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:21:05.416950 kernel: iscsi: registered transport (tcp)
Jan 17 12:21:05.440967 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:21:05.441049 kernel: QLogic iSCSI HBA Driver
Jan 17 12:21:05.493983 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:21:05.508100 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:21:05.536949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:21:05.537040 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:21:05.537069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:21:05.583965 kernel: raid6: avx2x4 gen() 17854 MB/s
Jan 17 12:21:05.600953 kernel: raid6: avx2x2 gen() 17895 MB/s
Jan 17 12:21:05.618318 kernel: raid6: avx2x1 gen() 13756 MB/s
Jan 17 12:21:05.618372 kernel: raid6: using algorithm avx2x2 gen() 17895 MB/s
Jan 17 12:21:05.636373 kernel: raid6: .... xor() 17696 MB/s, rmw enabled
Jan 17 12:21:05.636456 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:21:05.659955 kernel: xor: automatically using best checksumming function avx
Jan 17 12:21:05.831961 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:21:05.845800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:21:05.856141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:21:05.873745 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 12:21:05.880838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:21:05.892140 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:21:05.921726 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 17 12:21:05.960845 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:21:05.967168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:21:06.064869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:21:06.077186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:21:06.114383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:21:06.128414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:21:06.133038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:21:06.137649 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:21:06.149563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:21:06.190448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:21:06.208019 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:21:06.226954 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:21:06.232947 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:21:06.240959 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:21:06.274108 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 12:21:06.294830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:21:06.295049 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:21:06.300337 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:21:06.304002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:21:06.304254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:21:06.308091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:21:06.322343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:21:06.345800 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 17 12:21:06.362068 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 12:21:06.362327 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 12:21:06.362587 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 12:21:06.362839 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 12:21:06.363087 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:21:06.363116 kernel: GPT:17805311 != 25165823
Jan 17 12:21:06.363140 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:21:06.363163 kernel: GPT:17805311 != 25165823
Jan 17 12:21:06.363187 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:21:06.363212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:21:06.363248 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 12:21:06.373725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:21:06.385284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:21:06.418961 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (463)
Jan 17 12:21:06.433941 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (453)
Jan 17 12:21:06.440808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 12:21:06.454718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 12:21:06.456803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:21:06.479099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 12:21:06.485645 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 12:21:06.485812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 12:21:06.501176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:21:06.530153 disk-uuid[549]: Primary Header is updated.
Jan 17 12:21:06.530153 disk-uuid[549]: Secondary Entries is updated.
Jan 17 12:21:06.530153 disk-uuid[549]: Secondary Header is updated.
Jan 17 12:21:06.545953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:21:06.555940 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:21:07.586941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:21:07.587037 disk-uuid[550]: The operation has completed successfully.
Jan 17 12:21:07.661267 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:21:07.661421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:21:07.685176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:21:07.722365 sh[567]: Success
Jan 17 12:21:07.744954 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:21:07.825874 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:21:07.833281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:21:07.852844 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:21:07.906668 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:21:07.906780 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:21:07.906808 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:21:07.916122 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:21:07.922948 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:21:07.953952 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:21:07.959245 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:21:07.969022 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:21:07.974207 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:21:08.000152 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:21:08.050886 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:21:08.050999 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:21:08.051026 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:21:08.074026 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:21:08.074115 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:21:08.089745 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:21:08.109416 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:21:08.115137 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:21:08.141207 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:21:08.181633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:21:08.188166 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:21:08.259078 systemd-networkd[749]: lo: Link UP
Jan 17 12:21:08.259092 systemd-networkd[749]: lo: Gained carrier
Jan 17 12:21:08.261260 systemd-networkd[749]: Enumeration completed
Jan 17 12:21:08.261720 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:21:08.261883 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:21:08.261890 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:21:08.263703 systemd-networkd[749]: eth0: Link UP
Jan 17 12:21:08.263710 systemd-networkd[749]: eth0: Gained carrier
Jan 17 12:21:08.263725 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:21:08.360619 ignition[705]: Ignition 2.19.0
Jan 17 12:21:08.276221 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.73/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 12:21:08.360628 ignition[705]: Stage: fetch-offline
Jan 17 12:21:08.325427 systemd[1]: Reached target network.target - Network.
Jan 17 12:21:08.360672 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:21:08.362816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:21:08.360683 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:21:08.381204 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:21:08.360810 ignition[705]: parsed url from cmdline: ""
Jan 17 12:21:08.445107 unknown[759]: fetched base config from "system"
Jan 17 12:21:08.360815 ignition[705]: no config URL provided
Jan 17 12:21:08.445121 unknown[759]: fetched base config from "system"
Jan 17 12:21:08.360821 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:21:08.445133 unknown[759]: fetched user config from "gcp"
Jan 17 12:21:08.360831 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:21:08.447549 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:21:08.360839 ignition[705]: failed to fetch config: resource requires networking
Jan 17 12:21:08.465189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:21:08.361152 ignition[705]: Ignition finished successfully
Jan 17 12:21:08.501428 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:21:08.432816 ignition[759]: Ignition 2.19.0
Jan 17 12:21:08.524204 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:21:08.432825 ignition[759]: Stage: fetch
Jan 17 12:21:08.581816 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:21:08.433052 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:21:08.599344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:21:08.433064 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:21:08.617125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:21:08.433198 ignition[759]: parsed url from cmdline: ""
Jan 17 12:21:08.635138 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:21:08.433206 ignition[759]: no config URL provided
Jan 17 12:21:08.649128 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:21:08.433215 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:21:08.649259 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:21:08.433225 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:21:08.682148 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:21:08.433248 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 12:21:08.438097 ignition[759]: GET result: OK
Jan 17 12:21:08.438219 ignition[759]: parsing config with SHA512: e709042740a35f923ebf3c827261db5e8586927af85525a8e2ca807932a68c697a6324c9ef5e59063187cfb1129179c3a4c5d51aafa120fa8782c25443fd38b8
Jan 17 12:21:08.445700 ignition[759]: fetch: fetch complete
Jan 17 12:21:08.445709 ignition[759]: fetch: fetch passed
Jan 17 12:21:08.445764 ignition[759]: Ignition finished successfully
Jan 17 12:21:08.490230 ignition[766]: Ignition 2.19.0
Jan 17 12:21:08.490239 ignition[766]: Stage: kargs
Jan 17 12:21:08.490444 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:21:08.490456 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:21:08.491577 ignition[766]: kargs: kargs passed
Jan 17 12:21:08.491634 ignition[766]: Ignition finished successfully
Jan 17 12:21:08.579152 ignition[772]: Ignition 2.19.0
Jan 17 12:21:08.579163 ignition[772]: Stage: disks
Jan 17 12:21:08.579496 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:21:08.579510 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:21:08.580590 ignition[772]: disks: disks passed
Jan 17 12:21:08.580668 ignition[772]: Ignition finished successfully
Jan 17 12:21:08.725679 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 12:21:08.883005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:21:08.889369 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:21:09.024960 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:21:09.025830 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:21:09.034834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:21:09.059077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:21:09.069197 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:21:09.093606 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:21:09.166108 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 17 12:21:09.166174 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:21:09.166203 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:21:09.166228 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:21:09.166249 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:21:09.166274 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:21:09.093708 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:21:09.093753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:21:09.137751 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:21:09.176044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:21:09.207321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:21:09.319761 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:21:09.330088 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:21:09.340072 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:21:09.350105 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:21:09.486364 systemd-networkd[749]: eth0: Gained IPv6LL
Jan 17 12:21:09.493761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:21:09.499076 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:21:09.538943 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:21:09.540255 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:21:09.550806 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:21:09.575314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:21:09.593127 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:21:09.612223 ignition[900]: INFO : Ignition 2.19.0
Jan 17 12:21:09.612223 ignition[900]: INFO : Stage: mount
Jan 17 12:21:09.612223 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:21:09.612223 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:21:09.612223 ignition[900]: INFO : mount: mount passed
Jan 17 12:21:09.612223 ignition[900]: INFO : Ignition finished successfully
Jan 17 12:21:09.611091 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:21:10.032232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:21:10.078960 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Jan 17 12:21:10.096368 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:10.096471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:10.096498 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:21:10.119943 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:21:10.120035 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:21:10.123341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:21:10.167495 ignition[929]: INFO : Ignition 2.19.0 Jan 17 12:21:10.167495 ignition[929]: INFO : Stage: files Jan 17 12:21:10.182188 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:10.182188 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:10.182188 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:21:10.182188 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:21:10.178376 unknown[929]: wrote ssh authorized keys file for user: core Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:21:10.349110 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:21:10.459555 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 
17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:21:13.925185 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:21:14.288173 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:14.288173 ignition[929]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:14.328202 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:14.328202 ignition[929]: INFO : files: files passed Jan 17 12:21:14.328202 
ignition[929]: INFO : Ignition finished successfully Jan 17 12:21:14.294866 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:14.324355 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:14.345252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:14.389672 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:14.609134 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.609134 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.389801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:21:14.665102 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.409638 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:14.434559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:14.466264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:14.581822 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:14.581983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:14.602008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:14.620332 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:14.634411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:14.641269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:14.711071 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:14.730145 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:14.765836 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:14.778321 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:14.799403 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:14.817296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:14.817514 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:14.852373 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:14.873310 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:14.892360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:14.912286 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:14.933381 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:14.952372 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:14.970293 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:14.992371 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:15.012308 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:15.030377 systemd[1]: Stopped target swap.target - Swaps. 
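The Ignition "files" stage logged above wrote SSH keys for "core", several plain files, a sysext symlink, a containerd drop-in, and a unit preset. For readers unfamiliar with Ignition, below is a minimal sketch of the shape of config that produces such a run, rendered as a Python dict and serialized to JSON. The paths, URLs, and unit names are lifted from the log; the field names follow the Ignition v3 spec (which the Ignition 2.19.0 binary seen here consumes) as I recall it, and all file/unit bodies are elided, so treat this as an illustration, not the config this machine actually booted with.

```python
# Sketch of an Ignition v3-style config matching the "files" stage ops
# above. Schema from memory of the Ignition spec -- an assumption, not
# the actual config, which is not part of this capture.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        # op(1)/op(2): create or modify "core" and install SSH keys
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (elided)"]}]
    },
    "storage": {
        "files": [
            # op(3): marker file selecting cgroup v1
            {"path": "/etc/flatcar-cgroupv1"},
            # op(4): remote fetch, matching the GET lines above
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            # op(9): update configuration
            {"path": "/etc/flatcar/update.conf"},
        ],
        "links": [
            # op(a): symlink activating the Kubernetes sysext image
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # op(c)/op(d): drop-in for containerd.service
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# (drop-in body elided)\n"}]},
            # op(e)/op(10): unit written, then preset to enabled
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# (unit body elided)\n"},
        ]
    },
}

print(json.dumps(config, indent=2))
```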
Jan 17 12:21:15.049309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:15.049476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:15.077480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:15.087491 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:15.106415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:21:15.106581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:15.237248 ignition[981]: INFO : Ignition 2.19.0 Jan 17 12:21:15.237248 ignition[981]: INFO : Stage: umount Jan 17 12:21:15.237248 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:15.237248 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:15.237248 ignition[981]: INFO : umount: umount passed Jan 17 12:21:15.237248 ignition[981]: INFO : Ignition finished successfully Jan 17 12:21:15.124560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:21:15.124810 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:15.157420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:15.157653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:15.165510 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:15.165717 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:15.192311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:15.249237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:15.252309 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:15.252538 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:15.318357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:15.318739 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:15.354238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:15.355309 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:15.355428 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:15.371861 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:15.372022 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:15.391403 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:15.391530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:15.399180 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:15.399243 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:15.425291 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:15.425370 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:15.433362 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:15.433431 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:15.450337 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:15.468351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:15.468438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
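The run above is systemd tearing down initrd units in reverse dependency order before switching root. A quick way to recover that stop sequence from a console capture like this one is to scan for the "Stopped"/"Closed" messages; a small sketch follows, assuming the journalctl-style "Mon DD HH:MM:SS.ffffff" prefixes used throughout this log.

```python
# Sketch: recover the order in which systemd stopped initrd units from
# a run-on console capture like the one above. Assumes the timestamp
# and "systemd[1]: Stopped <unit> - <desc>." shapes seen in this log.
import re

STOP_RE = re.compile(
    r"(\w{3} \d+ [\d:.]+) systemd\[1\]: (?:Stopped|Closed) (?:target )?(\S+)"
)

def stop_order(log_text: str):
    return [(m.group(1), m.group(2)) for m in STOP_RE.finditer(log_text)]

sample = ("Jan 17 12:21:15.049309 systemd[1]: dracut-pre-mount.service: "
          "Deactivated successfully. Jan 17 12:21:15.049476 systemd[1]: "
          "Stopped dracut-pre-mount.service - dracut pre-mount hook.")
print(stop_order(sample))
# [('Jan 17 12:21:15.049476', 'dracut-pre-mount.service')]
```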
Jan 17 12:21:15.483382 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:15.501280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:15.505075 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:15.517326 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:15.535286 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:21:15.551407 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:15.551469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:15.577298 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:15.577364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:21:15.586393 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:15.586475 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:15.603389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:15.603465 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:15.622388 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:15.622467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:15.639592 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:15.644009 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 17 12:21:15.666333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:15.684578 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:15.684723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:15.703565 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:15.704017 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:15.721497 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:15.721575 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:15.743065 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:15.747232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:15.747315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:15.794301 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:15.794391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:15.814296 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:15.814374 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:15.837286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:15.837368 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:16.285105 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:15.845455 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:15.869427 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:15.869615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:15.899479 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 17 12:21:15.899595 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:15.918930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:15.919044 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:15.937190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:15.937272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:15.955183 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:15.955296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:15.983072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:15.983194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:16.013108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:16.013234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:16.048153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:21:16.051224 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:16.051306 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:16.099278 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:21:16.099363 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:16.118287 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:16.118367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:16.139284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:16.139367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:16.147886 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:16.148052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:16.165861 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:16.188198 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:16.237448 systemd[1]: Switching root. 
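Note that journald flushes some entries late, so lines can appear out of chronological order; the "Received SIGTERM" line stamped 12:21:16.285105 sits among 12:21:15.8xx entries a little earlier in this capture. If you need strict ordering for analysis, re-sorting by the timestamp prefix is enough; a sketch, assuming every line carries the "Jan 17 HH:MM:SS.ffffff" prefix used here (the year is not in the log, so one is pinned arbitrarily).

```python
# Sketch: re-sort journal lines by their timestamp prefix. journald can
# emit late-flushed entries out of order (see the SIGTERM line above).
from datetime import datetime

def sort_key(line: str) -> datetime:
    # "Jan 17 12:21:16.285105 ..." -> datetime; the year is absent from
    # the log, so 2025 is pinned here purely to make parsing work.
    stamp = " ".join(line.split()[:3])
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

lines = [
    "Jan 17 12:21:16.285105 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).",
    "Jan 17 12:21:15.845455 systemd[1]: Stopping systemd-udevd.service...",
]
for line in sorted(lines, key=sort_key):
    print(line)
```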
Jan 17 12:21:16.582107 systemd-journald[183]: Journal stopped Jan 17 12:21:05.090039 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:21:05.090090 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:05.090108 kernel: BIOS-provided physical RAM map: Jan 17 12:21:05.090122 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 12:21:05.090136 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 12:21:05.090149 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 12:21:05.090167 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 12:21:05.090186 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 12:21:05.090200 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 12:21:05.090214 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 12:21:05.090229 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 12:21:05.090244 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 12:21:05.090258 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 12:21:05.090274 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 12:21:05.090297 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 12:21:05.090314 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 12:21:05.090331 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 12:21:05.090347 kernel: NX (Execute Disable) protection: active Jan 17 12:21:05.090363 kernel: APIC: Static calls initialized Jan 17 12:21:05.090379 kernel: efi: EFI v2.7 by EDK II Jan 17 12:21:05.090395 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 17 12:21:05.090411 kernel: SMBIOS 2.4 present. 
Jan 17 12:21:05.090427 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 17 12:21:05.090442 kernel: Hypervisor detected: KVM Jan 17 12:21:05.090463 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:21:05.090480 kernel: kvm-clock: using sched offset of 12322430719 cycles Jan 17 12:21:05.090498 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:21:05.090515 kernel: tsc: Detected 2299.998 MHz processor Jan 17 12:21:05.090532 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:21:05.090550 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:21:05.090567 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 12:21:05.090584 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 12:21:05.090601 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:21:05.090622 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 12:21:05.090639 kernel: Using GB pages for direct mapping Jan 17 12:21:05.090655 kernel: Secure boot disabled Jan 17 12:21:05.090672 kernel: ACPI: Early table checksum verification disabled Jan 17 12:21:05.090696 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 12:21:05.090713 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 12:21:05.090731 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 12:21:05.090755 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 12:21:05.090777 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 12:21:05.090795 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 17 12:21:05.090813 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 12:21:05.090831 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 12:21:05.090850 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 12:21:05.090868 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 12:21:05.090889 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 12:21:05.090907 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 12:21:05.090950 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 12:21:05.090968 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 12:21:05.090986 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 12:21:05.091004 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 12:21:05.091022 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 12:21:05.091040 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 12:21:05.091056 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 12:21:05.091077 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 12:21:05.091095 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:21:05.091113 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:21:05.091132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:21:05.091149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 17 12:21:05.091167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 12:21:05.091185 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 12:21:05.091202 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 12:21:05.091219 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 17 12:21:05.091241 kernel: Zone ranges: Jan 17 12:21:05.091258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:21:05.091274 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 12:21:05.091292 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:21:05.091310 kernel: Movable zone start for each node Jan 17 12:21:05.091328 kernel: Early memory node ranges Jan 17 12:21:05.091346 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 12:21:05.091364 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 12:21:05.091382 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 12:21:05.091404 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 12:21:05.091422 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:21:05.091440 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 12:21:05.091458 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:21:05.091476 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 12:21:05.091495 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 12:21:05.091513 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 12:21:05.091531 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 12:21:05.091549 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:21:05.091571 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:21:05.091589 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:21:05.091606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:21:05.091624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:21:05.091643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:21:05.091661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:21:05.091679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:21:05.091704 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:21:05.091723 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:21:05.091745 kernel: Booting paravirtualized kernel on KVM Jan 17 12:21:05.091764 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:21:05.091782 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:21:05.091800 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:21:05.091817 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:21:05.091834 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:21:05.091852 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:21:05.091871 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:21:05.091891 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:05.091927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:21:05.091942 kernel: random: crng init done Jan 17 12:21:05.091957 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 12:21:05.091971 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:21:05.091984 kernel: Fallback order for Node 0: 0 Jan 17 12:21:05.091998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 12:21:05.092014 kernel: Policy zone: Normal Jan 17 12:21:05.092028 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:21:05.092048 kernel: software IO TLB: area num 2. Jan 17 12:21:05.092063 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 346948K reserved, 0K cma-reserved) Jan 17 12:21:05.092077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:21:05.092092 kernel: Kernel/User page tables isolation: enabled Jan 17 12:21:05.092106 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:21:05.092121 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:21:05.092135 kernel: Dynamic Preempt: voluntary Jan 17 12:21:05.092151 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:21:05.092174 kernel: rcu: RCU event tracing is enabled. Jan 17 12:21:05.092209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:21:05.092226 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:21:05.092243 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:21:05.092264 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:21:05.092282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:21:05.092299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:21:05.092318 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:21:05.092335 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:21:05.092372 kernel: Console: colour dummy device 80x25 Jan 17 12:21:05.092395 kernel: printk: console [ttyS0] enabled Jan 17 12:21:05.092414 kernel: ACPI: Core revision 20230628 Jan 17 12:21:05.092452 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:21:05.092469 kernel: x2apic enabled Jan 17 12:21:05.092486 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:21:05.092503 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 12:21:05.092523 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:21:05.092542 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 12:21:05.092565 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 12:21:05.092584 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 12:21:05.092603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:21:05.092621 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:21:05.092640 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:21:05.092657 kernel: Spectre V2 : Mitigation: IBRS Jan 17 12:21:05.092676 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:21:05.092703 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:21:05.092722 kernel: RETBleed: Mitigation: IBRS Jan 17 12:21:05.092745 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:21:05.092764 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 12:21:05.092782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:21:05.092801 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:21:05.092819 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:21:05.092837 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:21:05.092856 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:21:05.092874 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:21:05.092893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:21:05.092932 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:21:05.092952 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:21:05.092970 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:21:05.092989 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:21:05.093007 kernel: landlock: Up and running. Jan 17 12:21:05.093025 kernel: SELinux: Initializing. Jan 17 12:21:05.093043 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:21:05.093062 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:21:05.093080 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 12:21:05.093104 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:05.093122 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:05.093141 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:05.093160 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 12:21:05.093179 kernel: signal: max sigframe size: 1776 Jan 17 12:21:05.093197 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:21:05.093216 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:21:05.093234 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:21:05.093253 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:21:05.093275 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:21:05.093294 kernel: .... node #0, CPUs: #1 Jan 17 12:21:05.093313 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:21:05.093333 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:21:05.093351 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:21:05.093370 kernel: smpboot: Max logical packages: 1 Jan 17 12:21:05.093388 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 12:21:05.093406 kernel: devtmpfs: initialized Jan 17 12:21:05.093428 kernel: x86/mm: Memory block size: 128MB Jan 17 12:21:05.093447 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 12:21:05.093466 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:21:05.093484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:21:05.093503 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:21:05.093523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:21:05.093541 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:21:05.093560 kernel: audit: type=2000 audit(1737116464.137:1): state=initialized audit_enabled=0 res=1 Jan 17 12:21:05.093577 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:21:05.093599 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:21:05.093618 kernel: cpuidle: using governor menu Jan 17 12:21:05.093636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:21:05.093655 kernel: dca service started, version 1.12.1 Jan 17 12:21:05.093673 kernel: PCI: Using configuration type 1 for base access Jan 17 12:21:05.093697 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
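As a worked check on this second journal pass: the five "usable" ranges in the replayed BIOS-e820 map account exactly for the total in the "Memory: 7513376K/7860584K available" line above. Summing them in Python:

```python
# Worked check: sum the "usable" BIOS-e820 ranges from the replayed RAM
# map. They should equal the 7860584K total reported in the Memory: line.
usable = [  # (start, end) inclusive, copied from the BIOS-e820 lines
    (0x0000000000001000, 0x0000000000054fff),
    (0x0000000000060000, 0x0000000000097fff),
    (0x0000000000100000, 0x00000000bf8ecfff),
    (0x00000000bfbff000, 0x00000000bffdffff),
    (0x0000000100000000, 0x000000021fffffff),
]

total = sum(end - start + 1 for start, end in usable)
print(total, "bytes =", total // 1024, "KiB")
# 8049238016 bytes = 7860584 KiB, matching the Memory: line exactly;
# the smaller 7513376K figure is what remains after kernel reservations.
```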
Jan 17 12:21:05.093715 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:21:05.093745 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:21:05.093765 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:21:05.093789 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:21:05.093806 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:21:05.093823 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:21:05.093844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:21:05.093863 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:21:05.093882 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:21:05.093902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:21:05.093954 kernel: ACPI: Interpreter enabled Jan 17 12:21:05.093974 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:21:05.093997 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:21:05.094018 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:21:05.094038 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 12:21:05.094057 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:21:05.094076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:21:05.094360 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:21:05.094572 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:21:05.094772 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:21:05.094797 kernel: PCI host bridge to bus 0000:00 Jan 17 12:21:05.095024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:21:05.095206 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:21:05.095373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:21:05.095537 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 12:21:05.095707 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:21:05.095910 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:21:05.096143 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 12:21:05.096330 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:21:05.096508 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:21:05.096721 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 12:21:05.096904 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 12:21:05.097142 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 12:21:05.097333 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:21:05.097522 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 12:21:05.097730 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 12:21:05.097976 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:21:05.098171 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 12:21:05.098355 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 12:21:05.098387 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:21:05.098408 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:21:05.098429 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:21:05.098447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:21:05.098467 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:21:05.098487 kernel: iommu: Default domain type: Translated Jan 17 12:21:05.098507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:21:05.098526 kernel: efivars: Registered efivars operations Jan 17 12:21:05.098546 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:21:05.098570 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:21:05.098590 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 12:21:05.098610 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 12:21:05.098629 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 12:21:05.098649 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 12:21:05.098668 kernel: vgaarb: loaded Jan 17 12:21:05.098695 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:21:05.098715 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:21:05.098736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:21:05.098760 kernel: pnp: PnP ACPI init Jan 17 12:21:05.098780 kernel: pnp: PnP ACPI: found 7 devices Jan 17 12:21:05.098801 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:21:05.098821 kernel: NET: Registered PF_INET protocol family Jan 17 12:21:05.098840 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:21:05.098861 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 12:21:05.098881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:21:05.098900 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:21:05.098945 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:21:05.098970 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 12:21:05.098990 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:21:05.099010 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:21:05.099030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:21:05.099050 kernel: NET: Registered PF_XDP protocol family Jan 17 12:21:05.099234 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:21:05.099400 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:21:05.099566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:21:05.099747 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 12:21:05.099969 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:21:05.099993 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:21:05.100009 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:21:05.100026 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 12:21:05.100042 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:21:05.100057 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:21:05.100074 kernel: clocksource: Switched to clocksource tsc Jan 17 12:21:05.100099 kernel: Initialise system trusted keyrings Jan 17 12:21:05.100117 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 12:21:05.100136 kernel: Key type asymmetric registered Jan 17 12:21:05.100155 kernel: Asymmetric key parser 'x509' registered Jan 17 12:21:05.100174 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:21:05.100190 kernel: io scheduler mq-deadline registered Jan 17 12:21:05.100208 kernel: io scheduler kyber registered Jan 17 12:21:05.100226 kernel: io scheduler bfq registered Jan 17 12:21:05.100243 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:21:05.100267 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:21:05.100467 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 12:21:05.100493 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 12:21:05.100675 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 12:21:05.100709 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:21:05.100891 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 12:21:05.100928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:21:05.100957 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:21:05.100977 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:21:05.101003 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 12:21:05.101022 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 12:21:05.101217 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 12:21:05.101245 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:21:05.101264 kernel: i8042: Warning: Keylock active Jan 17 12:21:05.101283 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:21:05.101302 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:21:05.101489 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:21:05.101667 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 12:21:05.101847 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:21:04 UTC (1737116464) Jan 17 12:21:05.102067 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:21:05.102092 kernel: intel_pstate: CPU model not supported Jan 17 12:21:05.102110 kernel: pstore: Using crash dump compression: deflate Jan 17 12:21:05.102129 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:21:05.102146 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:21:05.102164 kernel: Segment Routing with IPv6 Jan 17 12:21:05.102188 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:21:05.102206 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:21:05.102223 kernel: Key type dns_resolver registered Jan 17 12:21:05.102240 kernel: IPI shorthand broadcast: enabled Jan 17 12:21:05.102259 kernel: sched_clock: Marking stable (853035991, 135448446)->(1005414435, -16929998) Jan 17 12:21:05.102277 kernel: registered taskstats version 1 Jan 17 12:21:05.102295 kernel: Loading compiled-in X.509 certificates Jan 17 12:21:05.102312 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:21:05.102329 kernel: Key type .fscrypt registered Jan 17 12:21:05.102352 kernel: Key type fscrypt-provisioning registered Jan 17 12:21:05.102370 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:21:05.102388 kernel: ima: No architecture policies found Jan 17 
12:21:05.102405 kernel: clk: Disabling unused clocks Jan 17 12:21:05.102423 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:21:05.102441 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:21:05.102459 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:21:05.102475 kernel: Run /init as init process Jan 17 12:21:05.102497 kernel: with arguments: Jan 17 12:21:05.102514 kernel: /init Jan 17 12:21:05.102531 kernel: with environment: Jan 17 12:21:05.102549 kernel: HOME=/ Jan 17 12:21:05.102566 kernel: TERM=linux Jan 17 12:21:05.102585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:21:05.102603 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:21:05.102625 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:05.102650 systemd[1]: Detected virtualization google. Jan 17 12:21:05.102669 systemd[1]: Detected architecture x86-64. Jan 17 12:21:05.102698 systemd[1]: Running in initrd. Jan 17 12:21:05.102717 systemd[1]: No hostname configured, using default hostname. Jan 17 12:21:05.102733 systemd[1]: Hostname set to <localhost>. Jan 17 12:21:05.102753 systemd[1]: Initializing machine ID from random generator. Jan 17 12:21:05.102774 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:21:05.102794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:05.102820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:05.102843 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:21:05.102863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:05.102883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:21:05.102903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:21:05.102952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:21:05.102974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:21:05.102995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:05.103014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:05.103053 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:05.103073 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:05.103099 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:05.103123 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:05.103154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:05.103176 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:21:05.103195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:21:05.103216 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
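The PCI scan earlier in this pass enumerated three functions with vendor 0x1af4, the Red Hat / virtio PCI vendor ID; the legacy (transitional) device IDs identify the device types that the "virtio_pci: leaving for legacy driver" lines later attach to. A small decode of the IDs seen on this machine:

```python
# Decode the virtio functions enumerated above. 0x1af4 is the virtio
# PCI vendor ID; legacy (transitional) device IDs map to device types.
LEGACY_VIRTIO = {
    0x1000: "network card",
    0x1001: "block device",
    0x1002: "memory balloon",
    0x1003: "console",
    0x1004: "SCSI host",
    0x1005: "entropy source (RNG)",
}

for bdf, dev_id in [("0000:00:03.0", 0x1004),   # the Virtio SCSI HBA
                    ("0000:00:04.0", 0x1000),   # eth0's backing device
                    ("0000:00:05.0", 0x1005)]:
    print(f"{bdf}: 1af4:{dev_id:04x} -> virtio {LEGACY_VIRTIO[dev_id]}")
```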
Jan 17 12:21:05.103237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:05.103257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:05.103278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:05.103299 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:05.103323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:21:05.103344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:05.103365 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:21:05.103386 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:21:05.103406 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:05.103427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:05.103446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:05.103467 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:05.103532 systemd-journald[183]: Collecting audit messages is disabled. Jan 17 12:21:05.103581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:05.103603 systemd-journald[183]: Journal started Jan 17 12:21:05.103647 systemd-journald[183]: Runtime Journal (/run/log/journal/f6559d4ee2d24ea8be446b300cb2ac5c) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:21:05.105982 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:05.111083 systemd-modules-load[184]: Inserted module 'overlay' Jan 17 12:21:05.111410 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:21:05.121520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:05.140141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:05.153177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:05.160166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:05.163297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:05.173131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:21:05.173170 kernel: Bridge firewalling registered Jan 17 12:21:05.173975 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 17 12:21:05.179069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:05.184625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:05.203166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:05.213166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:05.226807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:05.237226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:21:05.245421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:05.250557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
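systemd-modules-load inserted 'overlay' and 'br_netfilter' above; the bridge warning is the kernel's reminder that br_netfilter must now be loaded explicitly for iptables filtering of bridged traffic. A quick way to confirm such modules are resident on the booted machine is the /proc/modules interface; a sketch (Linux-only, run on the host itself rather than wherever this log is being read):

```python
# Sketch: confirm the modules systemd-modules-load inserted above
# ('overlay', 'br_netfilter') are resident, via /proc/modules.
def loaded_modules() -> set[str]:
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

for mod in ("overlay", "br_netfilter"):
    state = "loaded" if mod in loaded_modules() else "absent"
    print(f"{mod}: {state}")
```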
Jan 17 12:21:05.262188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:05.282954 dracut-cmdline[211]: dracut-dracut-053 Jan 17 12:21:05.288389 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:05.315749 systemd-resolved[216]: Positive Trust Anchors: Jan 17 12:21:05.315778 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:05.315847 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:05.323089 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 17 12:21:05.324807 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:05.345189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:05.393959 kernel: SCSI subsystem initialized Jan 17 12:21:05.404942 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:21:05.416950 kernel: iscsi: registered transport (tcp) Jan 17 12:21:05.440967 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:21:05.441049 kernel: QLogic iSCSI HBA Driver Jan 17 12:21:05.493983 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:05.508100 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:21:05.536949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:21:05.537040 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:21:05.537069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:21:05.583965 kernel: raid6: avx2x4 gen() 17854 MB/s Jan 17 12:21:05.600953 kernel: raid6: avx2x2 gen() 17895 MB/s Jan 17 12:21:05.618318 kernel: raid6: avx2x1 gen() 13756 MB/s Jan 17 12:21:05.618372 kernel: raid6: using algorithm avx2x2 gen() 17895 MB/s Jan 17 12:21:05.636373 kernel: raid6: .... xor() 17696 MB/s, rmw enabled Jan 17 12:21:05.636456 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:21:05.659955 kernel: xor: automatically using best checksumming function avx Jan 17 12:21:05.831961 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:21:05.845800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:05.856141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:05.873745 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 17 12:21:05.880838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
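The raid6 lines above are the kernel benchmarking each SIMD gen() variant at boot and keeping the fastest; here avx2x2 wins at 17895 MB/s even though avx2x4 uses wider batching. The same selection in miniature, using the throughput figures the kernel measured on this boot:

```python
# Miniature of the selection the kernel performs above: measure each
# raid6 gen() variant's throughput and keep the fastest. Figures are
# the MB/s values from this boot's benchmark lines.
measured = {"avx2x4": 17854, "avx2x2": 17895, "avx2x1": 13756}

best = max(measured, key=measured.get)
print(f"raid6: using algorithm {best} gen() {measured[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 17895 MB/s
```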
Jan 17 12:21:05.892140 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:21:05.921726 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 17 12:21:05.960845 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:05.967168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:06.064869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:06.077186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:21:06.114383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:06.128414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:06.133038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:06.137649 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:06.149563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:21:06.190448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:06.208019 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:21:06.226954 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:21:06.232947 kernel: AES CTR mode by8 optimization enabled Jan 17 12:21:06.240959 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:21:06.274108 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 12:21:06.294830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:06.295049 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:06.300337 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:06.304002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:06.304254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:06.308091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:06.322343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:06.345800 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 17 12:21:06.362068 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 12:21:06.362327 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 12:21:06.362587 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 12:21:06.362839 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:21:06.363087 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:21:06.363116 kernel: GPT:17805311 != 25165823 Jan 17 12:21:06.363140 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:21:06.363163 kernel: GPT:17805311 != 25165823 Jan 17 12:21:06.363187 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:21:06.363212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:21:06.363248 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 12:21:06.373725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:06.385284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
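The GPT warnings above are expected on the first boot of a grown disk: the backup GPT header still sits at LBA 17805311, the last sector of the original Flatcar image, while the provisioned GCE persistent disk is 25165824 sectors. The arithmetic below reproduces the capacities the kernel prints, including the "12.9 GB/12.0 GiB" line:

```python
# Worked numbers for the GPT warnings above: the backup header sits
# where the original image ended, but the provisioned disk is larger,
# so the kernel flags the mismatch until the GPT is rewritten.
SECTOR = 512

disk_sectors  = 25165824      # "sd 0:0:1:0: [sda] 25165824 512-byte logical blocks"
image_sectors = 17805311 + 1  # backup header at LBA 17805311 = image's last sector

print(disk_sectors * SECTOR / 1e9,   "GB")    # 12.884901888 -> the "12.9 GB"
print(disk_sectors * SECTOR / 2**30, "GiB")   # 12.0         -> the "12.0 GiB"
print(image_sectors * SECTOR / 2**30, "GiB original image")  # ~8.49 GiB
```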
Jan 17 12:21:06.418961 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (463) Jan 17 12:21:06.433941 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (453) Jan 17 12:21:06.440808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 17 12:21:06.454718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 12:21:06.456803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:06.479099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:21:06.485645 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 12:21:06.485812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 12:21:06.501176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:21:06.530153 disk-uuid[549]: Primary Header is updated. Jan 17 12:21:06.530153 disk-uuid[549]: Secondary Entries is updated. Jan 17 12:21:06.530153 disk-uuid[549]: Secondary Header is updated. Jan 17 12:21:06.545953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:21:06.555940 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:21:07.586941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:21:07.587037 disk-uuid[550]: The operation has completed successfully. Jan 17 12:21:07.661267 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:21:07.661421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:21:07.685176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:21:07.722365 sh[567]: Success Jan 17 12:21:07.744954 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:21:07.825874 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:21:07.833281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:21:07.852844 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:21:07.906668 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:21:07.906780 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:07.906808 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:21:07.916122 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:21:07.922948 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:21:07.953952 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:21:07.959245 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:21:07.969022 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:21:07.974207 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:21:08.000152 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
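
verity-setup.service above maps /dev/mapper/usr so that every read from the /usr partition is verified against a hash tree whose root must equal the verity.usrhash kernel parameter. A conceptual sketch of how such a root digest is derived; real dm-verity adds a superblock, a salt, and on-disk layout details that are deliberately omitted here:

import hashlib

BLOCK = 4096  # dm-verity data/hash block size

def verity_root_hash(blocks: list[bytes]) -> bytes:
    """Hash every data block, then hash the digests level by level."""
    level = [hashlib.sha256(b).digest() for b in blocks]  # assumes >= 1 block
    while len(level) > 1:
        per_node = BLOCK // len(level[0])   # digests that fit in one hash block
        level = [
            hashlib.sha256(b"".join(level[i:i + per_node])).digest()
            for i in range(0, len(level), per_node)
        ]
    return level[0]

# The device-mapper target rejects any block whose path up the tree does
# not reproduce the expected root digest, which is why a tampered /usr
# fails at read time rather than at mount time.
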
Jan 17 12:21:08.050886 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:08.050999 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:08.051026 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:21:08.074026 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:21:08.074115 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:21:08.089745 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:21:08.109416 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:08.115137 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:21:08.141207 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:21:08.181633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:08.188166 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:08.259078 systemd-networkd[749]: lo: Link UP Jan 17 12:21:08.259092 systemd-networkd[749]: lo: Gained carrier Jan 17 12:21:08.261260 systemd-networkd[749]: Enumeration completed Jan 17 12:21:08.261720 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:08.261883 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:08.261890 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:08.263703 systemd-networkd[749]: eth0: Link UP Jan 17 12:21:08.263710 systemd-networkd[749]: eth0: Gained carrier Jan 17 12:21:08.263725 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:08.360619 ignition[705]: Ignition 2.19.0 Jan 17 12:21:08.276221 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.73/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:21:08.360628 ignition[705]: Stage: fetch-offline Jan 17 12:21:08.325427 systemd[1]: Reached target network.target - Network. Jan 17 12:21:08.360672 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:08.362816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:08.360683 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:08.381204 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:21:08.360810 ignition[705]: parsed url from cmdline: "" Jan 17 12:21:08.445107 unknown[759]: fetched base config from "system" Jan 17 12:21:08.360815 ignition[705]: no config URL provided Jan 17 12:21:08.445121 unknown[759]: fetched base config from "system" Jan 17 12:21:08.360821 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:08.445133 unknown[759]: fetched user config from "gcp" Jan 17 12:21:08.360831 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:08.447549 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:21:08.360839 ignition[705]: failed to fetch config: resource requires networking Jan 17 12:21:08.465189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:21:08.361152 ignition[705]: Ignition finished successfully Jan 17 12:21:08.501428 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:21:08.432816 ignition[759]: Ignition 2.19.0 Jan 17 12:21:08.524204 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:21:08.432825 ignition[759]: Stage: fetch Jan 17 12:21:08.581816 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:21:08.433052 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:08.599344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:08.433064 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:08.617125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:08.433198 ignition[759]: parsed url from cmdline: "" Jan 17 12:21:08.635138 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:08.433206 ignition[759]: no config URL provided Jan 17 12:21:08.649128 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:08.433215 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:08.649259 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:08.433225 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:08.682148 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:21:08.433248 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 12:21:08.438097 ignition[759]: GET result: OK Jan 17 12:21:08.438219 ignition[759]: parsing config with SHA512: e709042740a35f923ebf3c827261db5e8586927af85525a8e2ca807932a68c697a6324c9ef5e59063187cfb1129179c3a4c5d51aafa120fa8782c25443fd38b8 Jan 17 12:21:08.445700 ignition[759]: fetch: fetch complete Jan 17 12:21:08.445709 ignition[759]: fetch: fetch passed Jan 17 12:21:08.445764 ignition[759]: Ignition finished successfully Jan 17 12:21:08.490230 ignition[766]: Ignition 2.19.0 Jan 17 12:21:08.490239 ignition[766]: Stage: kargs Jan 17 12:21:08.490444 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:08.490456 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:08.491577 ignition[766]: kargs: kargs passed Jan 17 12:21:08.491634 ignition[766]: Ignition finished successfully Jan 17 12:21:08.579152 ignition[772]: Ignition 2.19.0 Jan 17 12:21:08.579163 ignition[772]: Stage: disks Jan 17 12:21:08.579496 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:08.579510 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:08.580590 ignition[772]: disks: disks passed Jan 17 12:21:08.580668 ignition[772]: Ignition finished successfully Jan 17 12:21:08.725679 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:21:08.883005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:21:08.889369 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:21:09.024960 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:21:09.025830 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:21:09.034834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:09.059077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
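
The fetch sequence above shows Ignition's flow on GCE: fetch-offline fails with "resource requires networking", systemd-networkd brings up eth0 via DHCP, and the fetch stage then pulls the user config from the metadata server and logs its SHA512 before parsing. Roughly equivalent, assuming the standard GCE metadata endpoint shown in the log (the Metadata-Flavor header is mandatory or the server refuses the request):

import hashlib
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    config = resp.read()

# Ignition logs the digest of the config it is about to parse, matching
# the "parsing config with SHA512: ..." line above.
print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
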
Jan 17 12:21:09.069197 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:21:09.093606 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:21:09.166108 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 17 12:21:09.166174 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:09.166203 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:09.166228 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:21:09.166249 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:21:09.166274 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:21:09.093708 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:21:09.093753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:09.137751 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:21:09.176044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:21:09.207321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:21:09.319761 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:21:09.330088 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:21:09.340072 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:21:09.350105 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:21:09.486364 systemd-networkd[749]: eth0: Gained IPv6LL Jan 17 12:21:09.493761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:09.499076 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:21:09.538943 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:09.540255 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:21:09.550806 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:21:09.575314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:21:09.593127 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:21:09.612223 ignition[900]: INFO : Ignition 2.19.0 Jan 17 12:21:09.612223 ignition[900]: INFO : Stage: mount Jan 17 12:21:09.612223 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:09.612223 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:09.612223 ignition[900]: INFO : mount: mount passed Jan 17 12:21:09.612223 ignition[900]: INFO : Ignition finished successfully Jan 17 12:21:09.611091 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:21:10.032232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 17 12:21:10.078960 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Jan 17 12:21:10.096368 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:10.096471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:10.096498 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:21:10.119943 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:21:10.120035 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:21:10.123341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:21:10.167495 ignition[929]: INFO : Ignition 2.19.0 Jan 17 12:21:10.167495 ignition[929]: INFO : Stage: files Jan 17 12:21:10.182188 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:10.182188 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:10.182188 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:21:10.182188 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:21:10.182188 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:21:10.178376 unknown[929]: wrote ssh authorized keys file for user: core Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:10.284236 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:21:10.349110 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:21:10.459555 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 
17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:10.476059 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:21:13.925185 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:21:14.288173 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:14.288173 ignition[929]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:14.328202 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:14.328202 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:14.328202 ignition[929]: INFO : files: files passed Jan 17 12:21:14.328202 
ignition[929]: INFO : Ignition finished successfully Jan 17 12:21:14.294866 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:14.324355 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:14.345252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:14.389672 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:14.609134 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.609134 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.389801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:21:14.665102 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:14.409638 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:14.434559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:14.466264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:14.581822 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:14.581983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:14.602008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:14.620332 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:14.634411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:14.641269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:14.711071 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:14.730145 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:14.765836 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:14.778321 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:14.799403 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:14.817296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:14.817514 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:14.852373 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:14.873310 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:14.892360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:14.912286 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:14.933381 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:14.952372 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:14.970293 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:14.992371 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:15.012308 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:15.030377 systemd[1]: Stopped target swap.target - Swaps. 
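
The files stage above writes a drop-in for containerd.service and writes-and-enables prepare-helm.service; both correspond to entries under systemd.units in the Ignition config fetched earlier (not shown in this log). A hypothetical fragment in the shape of the Ignition v3 spec, expressed as a Python dict, with placeholder bodies rather than the real contents written on this host:

ignition_fragment = {
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [{
                    "name": "10-use-cgroupfs.conf",
                    "contents": "[Service]\n# real drop-in body elided\n",
                }],
            },
            {
                "name": "prepare-helm.service",
                "enabled": True,   # produces the "setting preset to enabled" step
                "contents": "[Unit]\n# real unit body elided\n",
            },
        ]
    }
}
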
Jan 17 12:21:15.049309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:15.049476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:15.077480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:15.087491 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:15.106415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:21:15.106581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:15.237248 ignition[981]: INFO : Ignition 2.19.0 Jan 17 12:21:15.237248 ignition[981]: INFO : Stage: umount Jan 17 12:21:15.237248 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:15.237248 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:15.237248 ignition[981]: INFO : umount: umount passed Jan 17 12:21:15.237248 ignition[981]: INFO : Ignition finished successfully Jan 17 12:21:15.124560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:21:15.124810 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:15.157420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:15.157653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:15.165510 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:15.165717 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:15.192311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:15.249237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:15.252309 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:15.252538 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:15.318357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:15.318739 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:15.354238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:15.355309 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:15.355428 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:15.371861 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:15.372022 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:15.391403 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:15.391530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:15.399180 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:15.399243 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:15.425291 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:15.425370 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:15.433362 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:15.433431 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:15.450337 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:15.468351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:15.468438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 12:21:15.483382 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:15.501280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:15.505075 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:15.517326 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:15.535286 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:21:15.551407 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:15.551469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:15.577298 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:15.577364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:21:15.586393 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:15.586475 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:15.603389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:15.603465 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:15.622388 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:15.622467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:15.639592 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:15.644009 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 17 12:21:15.666333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:15.684578 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:15.684723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:15.703565 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:15.704017 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:15.721497 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:15.721575 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:15.743065 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:15.747232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:15.747315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:15.794301 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:15.794391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:15.814296 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:15.814374 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:15.837286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:15.837368 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:16.285105 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:15.845455 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:15.869427 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:15.869615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:15.899479 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 17 12:21:15.899595 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:15.918930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:15.919044 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:15.937190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:15.937272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:15.955183 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:15.955296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:15.983072 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:15.983194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:16.013108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:16.013234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:16.048153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:21:16.051224 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:16.051306 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:16.099278 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:21:16.099363 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:16.118287 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:16.118367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:16.139284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:16.139367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:16.147886 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:16.148052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:16.165861 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:16.188198 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:16.237448 systemd[1]: Switching root. Jan 17 12:21:16.582107 systemd-journald[183]: Journal stopped Jan 17 12:21:19.029225 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:21:19.029293 kernel: SELinux: policy capability open_perms=1 Jan 17 12:21:19.029314 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:21:19.029331 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:21:19.029349 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:21:19.029368 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:21:19.029390 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:21:19.029415 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:21:19.029436 kernel: audit: type=1403 audit(1737116476.966:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:21:19.029459 systemd[1]: Successfully loaded SELinux policy in 91.327ms. Jan 17 12:21:19.029484 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.371ms. 
Jan 17 12:21:19.029507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:19.029529 systemd[1]: Detected virtualization google. Jan 17 12:21:19.029551 systemd[1]: Detected architecture x86-64. Jan 17 12:21:19.029577 systemd[1]: Detected first boot. Jan 17 12:21:19.029601 systemd[1]: Initializing machine ID from random generator. Jan 17 12:21:19.029623 zram_generator::config[1039]: No configuration found. Jan 17 12:21:19.029647 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:21:19.029679 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:21:19.029705 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:21:19.029730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:21:19.029753 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:21:19.029774 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:21:19.029795 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:21:19.029820 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:21:19.029843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:21:19.029870 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:21:19.029893 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:21:19.030029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:19.030059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:19.030083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:21:19.030105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:21:19.030129 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:21:19.030153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:19.030181 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:21:19.030205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:19.030228 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:21:19.030251 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:19.030274 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:19.030295 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:19.030326 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:19.030349 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:21:19.030374 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:21:19.030404 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 17 12:21:19.030427 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:21:19.030451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:19.030474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:19.030497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:19.030521 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:21:19.030544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:21:19.030572 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:21:19.030597 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:21:19.030621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:19.030647 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:21:19.030685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:21:19.030710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:21:19.030734 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:21:19.030758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:19.030782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:19.030806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:21:19.030829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:19.030853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:19.030876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:19.030905 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:21:19.030949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:19.030973 kernel: fuse: init (API version 7.39) Jan 17 12:21:19.030996 kernel: ACPI: bus type drm_connector registered Jan 17 12:21:19.031019 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:21:19.031042 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:21:19.031068 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:21:19.031092 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:19.031121 kernel: loop: module loaded Jan 17 12:21:19.031143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:19.031166 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:21:19.031227 systemd-journald[1144]: Collecting audit messages is disabled. Jan 17 12:21:19.031280 systemd-journald[1144]: Journal started Jan 17 12:21:19.031325 systemd-journald[1144]: Runtime Journal (/run/log/journal/578c7b15336a4aaa9ee121df8a7c3831) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:21:19.045948 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 17 12:21:19.077949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:19.104954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:19.115966 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:19.128637 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:21:19.139348 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:21:19.150394 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:21:19.160307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:21:19.170353 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:21:19.180278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:21:19.190632 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:21:19.202535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:19.214468 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:21:19.214761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:21:19.226468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:19.226748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:19.238512 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:19.238800 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:19.249545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:19.249994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:19.261491 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:21:19.261765 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:21:19.272549 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:19.272842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:19.283542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:19.293661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:21:19.305533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:21:19.317533 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:19.341314 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:21:19.356071 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:21:19.379078 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:21:19.389096 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:21:19.398177 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:21:19.416183 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:21:19.427122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:21:19.432148 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:21:19.441614 systemd-journald[1144]: Time spent on flushing to /var/log/journal/578c7b15336a4aaa9ee121df8a7c3831 is 93.052ms for 917 entries. Jan 17 12:21:19.441614 systemd-journald[1144]: System Journal (/var/log/journal/578c7b15336a4aaa9ee121df8a7c3831) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:21:19.563264 systemd-journald[1144]: Received client request to flush runtime journal. Jan 17 12:21:19.450487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:19.460174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:19.482161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:19.506161 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:21:19.521780 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:21:19.533270 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:21:19.544593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:21:19.568276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:19.580409 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:21:19.596763 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 17 12:21:19.598005 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 17 12:21:19.603893 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:21:19.616995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:19.631634 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:21:19.641172 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:21:19.720546 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:21:19.740248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:19.781994 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 17 12:21:19.782497 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 17 12:21:19.792166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:20.302152 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:21:20.319155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:20.368112 systemd-udevd[1207]: Using default interface naming scheme 'v255'. Jan 17 12:21:20.408144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:20.431225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:20.468271 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:21:20.500614 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:21:20.608419 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 17 12:21:20.662931 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 12:21:20.675953 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:21:20.717955 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:21:20.739941 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:21:20.748940 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:21:20.818759 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 17 12:21:20.818854 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:21:20.826214 systemd-networkd[1217]: lo: Link UP Jan 17 12:21:20.826726 systemd-networkd[1217]: lo: Gained carrier Jan 17 12:21:20.842961 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 12:21:20.842092 systemd-networkd[1217]: Enumeration completed Jan 17 12:21:20.842743 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:20.842751 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:20.843622 systemd-networkd[1217]: eth0: Link UP Jan 17 12:21:20.843639 systemd-networkd[1217]: eth0: Gained carrier Jan 17 12:21:20.843667 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:20.846584 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:20.853007 systemd-networkd[1217]: eth0: DHCPv4 address 10.128.0.73/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:21:20.866017 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1216) Jan 17 12:21:20.877072 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:21:20.933887 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:21:20.960514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:21:20.984185 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:21:21.006185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:21.025019 lvm[1248]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:21.067534 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:21:21.068601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:21.078317 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:21:21.085849 lvm[1254]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:21.115751 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:21:21.117532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:21.117672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:21:21.117713 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:21.117801 systemd[1]: Reached target machines.target - Containers. 
Jan 17 12:21:21.120181 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:21:21.127210 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:21:21.135157 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:21:21.135421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:21.147190 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:21:21.197228 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:21:21.218604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:21:21.230534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:21.245319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:21:21.259461 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:21:21.262374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:21:21.274939 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:21:21.283209 kernel: loop0: detected capacity change from 0 to 54824 Jan 17 12:21:21.326037 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:21:21.353973 kernel: loop1: detected capacity change from 0 to 211296 Jan 17 12:21:21.443011 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 12:21:21.522981 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:21:21.603959 kernel: loop4: detected capacity change from 0 to 54824 Jan 17 12:21:21.639990 kernel: loop5: detected capacity change from 0 to 211296 Jan 17 12:21:21.681985 kernel: loop6: detected capacity change from 0 to 142488 Jan 17 12:21:21.726032 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:21:21.765938 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 12:21:21.766901 (sd-merge)[1279]: Merged extensions into '/usr'. Jan 17 12:21:21.774247 systemd[1]: Reloading requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:21:21.774416 systemd[1]: Reloading... Jan 17 12:21:21.880003 zram_generator::config[1303]: No configuration found. Jan 17 12:21:21.996775 ldconfig[1259]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:21:22.093479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:22.160069 systemd-networkd[1217]: eth0: Gained IPv6LL Jan 17 12:21:22.180270 systemd[1]: Reloading finished in 405 ms. Jan 17 12:21:22.198239 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:21:22.210713 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:21:22.221783 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:21:22.248249 systemd[1]: Starting ensure-sysext.service... 
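
The loop-device capacity probes and the (sd-merge) lines above are systemd-sysext assembling the final /usr from extension images; the kubernetes image is found through the /etc/extensions/kubernetes.raw symlink Ignition created earlier. A rough sketch of the discovery step, using the search directories documented for systemd-sysext:

from pathlib import Path

# Standard systemd-sysext search paths; /etc/extensions is where Ignition
# placed the kubernetes.raw symlink on this host.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
               "/var/lib/extensions", "/usr/lib/extensions"]

for d in map(Path, SEARCH_DIRS):
    if not d.is_dir():
        continue
    for img in sorted(d.glob("*.raw")):
        print(img, "->", img.resolve())
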
Jan 17 12:21:22.260185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:22.279146 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:21:22.279229 systemd[1]: Reloading... Jan 17 12:21:22.309275 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:21:22.309992 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:21:22.311833 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:21:22.312440 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Jan 17 12:21:22.312577 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Jan 17 12:21:22.319830 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:22.319858 systemd-tmpfiles[1357]: Skipping /boot Jan 17 12:21:22.345008 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:22.345029 systemd-tmpfiles[1357]: Skipping /boot Jan 17 12:21:22.423961 zram_generator::config[1387]: No configuration found. Jan 17 12:21:22.564996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:22.650731 systemd[1]: Reloading finished in 370 ms. Jan 17 12:21:22.679837 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:22.703427 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:22.723439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:21:22.743695 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:21:22.763099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:22.786350 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:21:22.791743 augenrules[1452]: No rules Jan 17 12:21:22.805345 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:22.828751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:22.830146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:22.841234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:22.859816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:22.881887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:22.892235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:22.892508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:22.895880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:21:22.909352 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 17 12:21:22.922101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:22.922414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:22.935052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:22.935332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:22.950209 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:21:22.957508 systemd-resolved[1448]: Positive Trust Anchors: Jan 17 12:21:22.957528 systemd-resolved[1448]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:22.957584 systemd-resolved[1448]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:22.962194 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:22.962498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:22.964988 systemd-resolved[1448]: Defaulting to hostname 'linux'. Jan 17 12:21:22.973738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:22.992756 systemd[1]: Reached target network.target - Network. Jan 17 12:21:23.001332 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:21:23.011318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:23.023290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:23.023714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:23.029421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:23.053357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:23.075576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:23.086367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:23.096375 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:21:23.106115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:23.106505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:23.110789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:23.111112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:23.123076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:23.123365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 12:21:23.136792 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:23.137093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:23.147903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:21:23.166409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:23.166829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:23.174246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:23.198180 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:23.217180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:23.237209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:23.256236 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:21:23.265337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:23.265456 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:21:23.275151 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:23.275208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:23.276713 systemd[1]: Finished ensure-sysext.service. Jan 17 12:21:23.285647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:23.285977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:23.297604 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:23.297888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:23.308612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:23.308898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:23.320718 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:23.321039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:23.358222 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:21:23.376230 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 12:21:23.376562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:23.376647 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:23.396351 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:21:23.408166 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:21:23.420409 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:21:23.430380 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:21:23.442207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 17 12:21:23.453173 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:21:23.453321 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:23.462152 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:23.472974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:21:23.485112 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:21:23.494438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:23.495492 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 12:21:23.507444 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:21:23.524460 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:21:23.534100 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:23.544119 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:23.553387 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:21:23.553476 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:23.553523 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:23.559071 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:21:23.582176 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:21:23.599093 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:21:23.618057 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:21:23.645188 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:21:23.653106 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:21:23.665402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:23.683727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:21:23.690179 jq[1529]: false Jan 17 12:21:23.695962 coreos-metadata[1526]: Jan 17 12:21:23.695 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 12:21:23.705958 coreos-metadata[1526]: Jan 17 12:21:23.701 INFO Fetch successful Jan 17 12:21:23.705958 coreos-metadata[1526]: Jan 17 12:21:23.701 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 12:21:23.707592 coreos-metadata[1526]: Jan 17 12:21:23.706 INFO Fetch successful Jan 17 12:21:23.707592 coreos-metadata[1526]: Jan 17 12:21:23.706 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 12:21:23.707040 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:21:23.711943 coreos-metadata[1526]: Jan 17 12:21:23.709 INFO Fetch successful Jan 17 12:21:23.711943 coreos-metadata[1526]: Jan 17 12:21:23.709 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 12:21:23.711943 coreos-metadata[1526]: Jan 17 12:21:23.710 INFO Fetch successful Jan 17 12:21:23.729182 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
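The coreos-metadata fetches above resolve the hostname, external IP, internal IP, and machine type from the GCE metadata server at the link-local address 169.254.169.254. The same endpoints can be queried by hand; the one non-obvious requirement is the Metadata-Flavor header, which GCE demands on every request (curl itself is an assumption; it is not part of this log):

    # Fetch the same instance attributes the agent requests above.
    curl -s -H 'Metadata-Flavor: Google' \
        http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
        http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip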
Jan 17 12:21:23.732872 extend-filesystems[1531]: Found loop4 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found loop5 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found loop6 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found loop7 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda1 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda2 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda3 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found usr Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda4 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda6 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda7 Jan 17 12:21:23.732872 extend-filesystems[1531]: Found sda9 Jan 17 12:21:23.732872 extend-filesystems[1531]: Checking size of /dev/sda9 Jan 17 12:21:23.749094 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 12:21:23.783078 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:21:23.805210 dbus-daemon[1528]: [system] SELinux support is enabled Jan 17 12:21:23.818285 ntpd[1538]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:21:23.818321 ntpd[1538]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:21:23.818337 ntpd[1538]: ---------------------------------------------------- Jan 17 12:21:23.818353 ntpd[1538]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:21:23.818367 ntpd[1538]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:21:23.818382 ntpd[1538]: corporation. Support and training for ntp-4 are Jan 17 12:21:23.818397 ntpd[1538]: available at https://www.nwtime.org/support Jan 17 12:21:23.818411 ntpd[1538]: ---------------------------------------------------- Jan 17 12:21:23.821502 ntpd[1538]: proto: precision = 0.100 usec (-23) Jan 17 12:21:23.822165 dbus-daemon[1528]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1217 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:21:23.823377 ntpd[1538]: basedate set to 2025-01-05 Jan 17 12:21:23.823404 ntpd[1538]: gps base set to 2025-01-05 (week 2348) Jan 17 12:21:23.833830 ntpd[1538]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:21:23.833906 ntpd[1538]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:21:23.835955 ntpd[1538]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:21:23.836067 ntpd[1538]: Listen normally on 3 eth0 10.128.0.73:123 Jan 17 12:21:23.837192 ntpd[1538]: Listen normally on 4 lo [::1]:123 Jan 17 12:21:23.837603 ntpd[1538]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:49%2]:123 Jan 17 12:21:23.837718 ntpd[1538]: Listening on routing socket on fd #22 for interface updates Jan 17 12:21:23.846898 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:21:23.853468 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 17 12:21:23.859291 ntpd[1538]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:21:23.859337 ntpd[1538]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:21:23.861545 extend-filesystems[1531]: Resized partition /dev/sda9 Jan 17 12:21:23.880178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:21:23.885105 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 17 12:21:23.905824 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:21:23.906266 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:21:23.918983 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 12:21:23.930655 extend-filesystems[1556]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 12:21:23.930655 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 12:21:23.930655 extend-filesystems[1556]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 17 12:21:23.938107 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:21:23.951648 init.sh[1548]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 12:21:23.951648 init.sh[1548]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 12:21:23.951648 init.sh[1548]: + /usr/bin/google_instance_setup Jan 17 12:21:23.952364 extend-filesystems[1531]: Resized filesystem in /dev/sda9 Jan 17 12:21:23.957177 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
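The extend-filesystems entries above record an online grow of the root filesystem: resize2fs 1.47.1 expands the mounted ext4 filesystem on /dev/sda9 from 1617920 to 2538491 4k blocks. The equivalent manual steps, as a sketch (device name taken from the log; run as root):

    # Grow the mounted ext4 filesystem to fill its partition (online resize).
    resize2fs /dev/sda9
    # Confirm the new size in filesystem blocks.
    dumpe2fs -h /dev/sda9 | grep 'Block count'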
Jan 17 12:21:23.971150 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1576) Jan 17 12:21:24.007135 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:21:24.050547 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:21:24.051107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:21:24.053066 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:21:24.053502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:21:24.063470 jq[1582]: true Jan 17 12:21:24.092667 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:21:24.093110 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:21:24.104111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:21:24.121569 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:21:24.125210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:21:24.132362 update_engine[1575]: I20250117 12:21:24.132238 1575 main.cc:92] Flatcar Update Engine starting Jan 17 12:21:24.149869 update_engine[1575]: I20250117 12:21:24.147280 1575 update_check_scheduler.cc:74] Next update check in 5m46s Jan 17 12:21:24.194935 jq[1593]: true Jan 17 12:21:24.202560 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:21:24.218869 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:21:24.260657 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:21:24.309802 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:21:24.325555 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:21:24.325770 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:21:24.325815 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:21:24.329196 tar[1592]: linux-amd64/helm Jan 17 12:21:24.329110 systemd-logind[1570]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:21:24.329146 systemd-logind[1570]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 12:21:24.329178 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:21:24.334642 systemd-logind[1570]: New seat seat0. Jan 17 12:21:24.355198 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:21:24.365152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:21:24.365196 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:21:24.382605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:21:24.397148 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 12:21:24.408470 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:21:24.540975 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:24.544628 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:21:24.570319 systemd[1]: Starting sshkeys.service... Jan 17 12:21:24.601551 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:21:24.632183 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetch failed with 404: resource not found Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetch successful Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetch failed with 404: resource not found Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetch failed with 404: resource not found Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 12:21:24.724799 coreos-metadata[1639]: Jan 17 12:21:24.724 INFO Fetch successful Jan 17 12:21:24.733981 unknown[1639]: wrote ssh authorized keys file for user: core Jan 17 12:21:24.818336 update-ssh-keys[1649]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:24.813361 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:21:24.839974 systemd[1]: Finished sshkeys.service. Jan 17 12:21:24.872705 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:21:24.925098 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:21:24.924375 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:21:24.926442 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:21:24.927011 dbus-daemon[1528]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1615 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:21:24.946058 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:21:25.052638 polkitd[1661]: Started polkitd version 121 Jan 17 12:21:25.070706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:21:25.089387 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 17 12:21:25.107772 polkitd[1661]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:21:25.107888 polkitd[1661]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:21:25.115443 polkitd[1661]: Finished loading, compiling and executing 2 rules Jan 17 12:21:25.122143 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:21:25.123441 polkitd[1661]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:21:25.123866 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:21:25.148516 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:21:25.148946 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:21:25.166343 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:21:25.183431 containerd[1594]: time="2025-01-17T12:21:25.183325589Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:21:25.223140 systemd-hostnamed[1615]: Hostname set to (transient) Jan 17 12:21:25.223971 systemd-resolved[1448]: System hostname changed to 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal'. Jan 17 12:21:25.247892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:21:25.269661 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:21:25.291727 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:21:25.301983 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:21:25.326899 containerd[1594]: time="2025-01-17T12:21:25.326741118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.338759 containerd[1594]: time="2025-01-17T12:21:25.338690615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.339979120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.340042568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.340271665Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.340303541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.340396820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:25.340522 containerd[1594]: time="2025-01-17T12:21:25.340422402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.341882245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.341950789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.341988600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.342008105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.342161286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.342534 containerd[1594]: time="2025-01-17T12:21:25.342481442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:25.345769 containerd[1594]: time="2025-01-17T12:21:25.344218525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:25.345769 containerd[1594]: time="2025-01-17T12:21:25.344257722Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:21:25.345769 containerd[1594]: time="2025-01-17T12:21:25.344395496Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:21:25.345769 containerd[1594]: time="2025-01-17T12:21:25.344485152Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362150497Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362282028Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362375401Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362406913Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362432369Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:21:25.363548 containerd[1594]: time="2025-01-17T12:21:25.362743268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:21:25.365966 containerd[1594]: time="2025-01-17T12:21:25.365230890Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366377459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366419565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366444075Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366468521Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366494062Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366516268Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366539698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366569097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366591508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366612817Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366634859Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366696583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366724421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.366939 containerd[1594]: time="2025-01-17T12:21:25.366754832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366778865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366800178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366822421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366841997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366879248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.367588 containerd[1594]: time="2025-01-17T12:21:25.366902582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.369989181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370047878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370075745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370123814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370162792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370223948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370247807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.370283821Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:21:25.372193 containerd[1594]: time="2025-01-17T12:21:25.372143322Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:21:25.373293 containerd[1594]: time="2025-01-17T12:21:25.372990467Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:21:25.373293 containerd[1594]: time="2025-01-17T12:21:25.373023876Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:21:25.373293 containerd[1594]: time="2025-01-17T12:21:25.373108442Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:21:25.374131 containerd[1594]: time="2025-01-17T12:21:25.373149583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:21:25.374131 containerd[1594]: time="2025-01-17T12:21:25.373533821Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:21:25.374131 containerd[1594]: time="2025-01-17T12:21:25.374056284Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:21:25.374131 containerd[1594]: time="2025-01-17T12:21:25.374096788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:25.377679 containerd[1594]: time="2025-01-17T12:21:25.377333586Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:21:25.377679 containerd[1594]: time="2025-01-17T12:21:25.377583097Z" level=info msg="Connect containerd service" Jan 17 12:21:25.380138 containerd[1594]: time="2025-01-17T12:21:25.380106077Z" level=info msg="using legacy CRI server" Jan 17 12:21:25.380817 containerd[1594]: time="2025-01-17T12:21:25.380254970Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:21:25.380817 containerd[1594]: time="2025-01-17T12:21:25.380509346Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.385879325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386048964Z" level=info msg="Start subscribing containerd event" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386121626Z" level=info msg="Start recovering state" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386227988Z" level=info msg="Start event monitor" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386244515Z" level=info msg="Start snapshots syncer" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386259310Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:21:25.386421 containerd[1594]: time="2025-01-17T12:21:25.386271837Z" level=info msg="Start streaming server" Jan 17 12:21:25.388674 containerd[1594]: time="2025-01-17T12:21:25.388139717Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:21:25.388674 containerd[1594]: time="2025-01-17T12:21:25.388238898Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:21:25.388492 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:21:25.390446 containerd[1594]: time="2025-01-17T12:21:25.390082728Z" level=info msg="containerd successfully booted in 0.211596s" Jan 17 12:21:25.580619 instance-setup[1562]: INFO Running google_set_multiqueue. Jan 17 12:21:25.619590 instance-setup[1562]: INFO Set channels for eth0 to 2. Jan 17 12:21:25.631243 instance-setup[1562]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 17 12:21:25.635076 instance-setup[1562]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 17 12:21:25.635760 instance-setup[1562]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 17 12:21:25.639065 instance-setup[1562]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 17 12:21:25.639609 instance-setup[1562]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 17 12:21:25.642471 instance-setup[1562]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 17 12:21:25.643385 instance-setup[1562]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 17 12:21:25.648143 instance-setup[1562]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 17 12:21:25.659120 instance-setup[1562]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:21:25.665253 instance-setup[1562]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:21:25.667025 instance-setup[1562]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 12:21:25.667093 instance-setup[1562]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 12:21:25.695492 init.sh[1548]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 12:21:25.824588 tar[1592]: linux-amd64/LICENSE Jan 17 12:21:25.824588 tar[1592]: linux-amd64/README.md Jan 17 12:21:25.860058 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:21:25.938651 startup-script[1720]: INFO Starting startup scripts. Jan 17 12:21:25.945499 startup-script[1720]: INFO No startup scripts found in metadata. Jan 17 12:21:25.945579 startup-script[1720]: INFO Finished running startup scripts. 
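The google_set_multiqueue pass above pins each virtio-net queue interrupt to a CPU and sets per-queue XPS masks, exactly as the INFO lines record. The same effect by hand, using the IRQ numbers and masks from this boot (IRQ numbers vary per boot, so these values are illustrative; run as root):

    # Pin the interrupt pairs for eth0's two queues to CPUs 0 and 1.
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list
    echo 1 > /proc/irq/34/smp_affinity_list
    # Steer transmit queue 0 to CPU 0 (mask 0x1) and queue 1 to CPU 1 (mask 0x2).
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus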
Jan 17 12:21:25.970979 init.sh[1548]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 12:21:25.970979 init.sh[1548]: + daemon_pids=() Jan 17 12:21:25.971187 init.sh[1548]: + for d in accounts clock_skew network Jan 17 12:21:25.973126 init.sh[1548]: + daemon_pids+=($!) Jan 17 12:21:25.973126 init.sh[1548]: + for d in accounts clock_skew network Jan 17 12:21:25.973126 init.sh[1548]: + daemon_pids+=($!) Jan 17 12:21:25.973126 init.sh[1548]: + for d in accounts clock_skew network Jan 17 12:21:25.973126 init.sh[1548]: + daemon_pids+=($!) Jan 17 12:21:25.973126 init.sh[1548]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 12:21:25.973126 init.sh[1548]: + /usr/bin/systemd-notify --ready Jan 17 12:21:25.973500 init.sh[1728]: + /usr/bin/google_accounts_daemon Jan 17 12:21:25.973983 init.sh[1729]: + /usr/bin/google_clock_skew_daemon Jan 17 12:21:25.974306 init.sh[1730]: + /usr/bin/google_network_daemon Jan 17 12:21:25.996404 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 12:21:26.012456 init.sh[1548]: + wait -n 1728 1729 1730 Jan 17 12:21:26.332763 groupadd[1735]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 12:21:26.337023 groupadd[1735]: group added to /etc/gshadow: name=google-sudoers Jan 17 12:21:26.394846 google-networking[1730]: INFO Starting Google Networking daemon. Jan 17 12:21:26.396174 google-clock-skew[1729]: INFO Starting Google Clock Skew daemon. Jan 17 12:21:26.408334 google-clock-skew[1729]: INFO Clock drift token has changed: 0. Jan 17 12:21:26.427091 groupadd[1735]: new group: name=google-sudoers, GID=1000 Jan 17 12:21:26.459375 google-accounts[1728]: INFO Starting Google Accounts daemon. Jan 17 12:21:26.473800 google-accounts[1728]: WARNING OS Login not installed. Jan 17 12:21:26.475904 google-accounts[1728]: INFO Creating a new user account for 0. Jan 17 12:21:26.480314 init.sh[1748]: useradd: invalid user name '0': use --badname to ignore Jan 17 12:21:26.480699 google-accounts[1728]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 12:21:27.000492 systemd-resolved[1448]: Clock change detected. Flushing caches. Jan 17 12:21:27.000988 google-clock-skew[1729]: INFO Synced system time with hardware clock. Jan 17 12:21:27.035010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:27.046890 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:21:27.051608 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:27.058185 systemd[1]: Startup finished in 13.218s (kernel) + 9.825s (userspace) = 23.044s. Jan 17 12:21:28.149177 kubelet[1758]: E0117 12:21:28.149067 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:28.152327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:28.152785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:32.244639 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
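The google-accounts failure above happens because a metadata SSH key is mapped to the login name "0", and useradd rejects a purely numeric name, exiting with status 3 as the log shows. The rejection is reproducible directly, and the --badname override named in the error message is shadow-utils' own escape hatch (running these as root is an assumption):

    # Reproduce the rejection; useradd exits 3 for an invalid user name.
    useradd -m -s /bin/bash -p '*' 0 || echo "useradd exited $?"
    # Force creation despite the bad name, as the log message suggests.
    useradd --badname -m -s /bin/bash -p '*' 0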
Jan 17 12:21:32.257506 systemd[1]: Started sshd@0-10.128.0.73:22-139.178.89.65:49090.service - OpenSSH per-connection server daemon (139.178.89.65:49090). Jan 17 12:21:32.542722 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 49090 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:32.544839 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:32.557045 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:21:32.569521 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:21:32.573243 systemd-logind[1570]: New session 1 of user core. Jan 17 12:21:32.591444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:21:32.600630 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:21:32.624642 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:32.756228 systemd[1777]: Queued start job for default target default.target. Jan 17 12:21:32.756861 systemd[1777]: Created slice app.slice - User Application Slice. Jan 17 12:21:32.756903 systemd[1777]: Reached target paths.target - Paths. Jan 17 12:21:32.756925 systemd[1777]: Reached target timers.target - Timers. Jan 17 12:21:32.761937 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:32.783114 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:32.783209 systemd[1777]: Reached target sockets.target - Sockets. Jan 17 12:21:32.783233 systemd[1777]: Reached target basic.target - Basic System. Jan 17 12:21:32.783307 systemd[1777]: Reached target default.target - Main User Target. Jan 17 12:21:32.783360 systemd[1777]: Startup finished in 149ms. Jan 17 12:21:32.783931 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:32.789269 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:33.016964 systemd[1]: Started sshd@1-10.128.0.73:22-139.178.89.65:49102.service - OpenSSH per-connection server daemon (139.178.89.65:49102). Jan 17 12:21:33.300489 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 49102 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:33.302352 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:33.308761 systemd-logind[1570]: New session 2 of user core. Jan 17 12:21:33.318789 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:21:33.514047 sshd[1789]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:33.518847 systemd[1]: sshd@1-10.128.0.73:22-139.178.89.65:49102.service: Deactivated successfully. Jan 17 12:21:33.525214 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:21:33.525431 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:21:33.527202 systemd-logind[1570]: Removed session 2. Jan 17 12:21:33.569243 systemd[1]: Started sshd@2-10.128.0.73:22-139.178.89.65:49108.service - OpenSSH per-connection server daemon (139.178.89.65:49108). Jan 17 12:21:33.858666 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 49108 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:33.860529 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:33.866762 systemd-logind[1570]: New session 3 of user core. 
Jan 17 12:21:33.873207 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:21:34.073157 sshd[1797]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:34.077599 systemd[1]: sshd@2-10.128.0.73:22-139.178.89.65:49108.service: Deactivated successfully. Jan 17 12:21:34.082117 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:21:34.083486 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:21:34.086065 systemd-logind[1570]: Removed session 3. Jan 17 12:21:34.124082 systemd[1]: Started sshd@3-10.128.0.73:22-139.178.89.65:49118.service - OpenSSH per-connection server daemon (139.178.89.65:49118). Jan 17 12:21:34.404650 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 49118 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:34.406341 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:34.412652 systemd-logind[1570]: New session 4 of user core. Jan 17 12:21:34.418141 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:21:34.619048 sshd[1805]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:34.623544 systemd[1]: sshd@3-10.128.0.73:22-139.178.89.65:49118.service: Deactivated successfully. Jan 17 12:21:34.629349 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:21:34.630162 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:21:34.631940 systemd-logind[1570]: Removed session 4. Jan 17 12:21:34.666675 systemd[1]: Started sshd@4-10.128.0.73:22-139.178.89.65:49134.service - OpenSSH per-connection server daemon (139.178.89.65:49134). Jan 17 12:21:34.956501 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 49134 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:34.958426 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:34.964911 systemd-logind[1570]: New session 5 of user core. Jan 17 12:21:34.975222 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:21:35.147931 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:21:35.148419 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:35.164672 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:35.207733 sshd[1813]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:35.214334 systemd[1]: sshd@4-10.128.0.73:22-139.178.89.65:49134.service: Deactivated successfully. Jan 17 12:21:35.219166 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:21:35.220548 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:21:35.221849 systemd-logind[1570]: Removed session 5. Jan 17 12:21:35.263261 systemd[1]: Started sshd@5-10.128.0.73:22-139.178.89.65:49148.service - OpenSSH per-connection server daemon (139.178.89.65:49148). Jan 17 12:21:35.546727 sshd[1822]: Accepted publickey for core from 139.178.89.65 port 49148 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:35.549103 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:35.555604 systemd-logind[1570]: New session 6 of user core. Jan 17 12:21:35.566186 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:21:35.726644 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:21:35.727175 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:35.732202 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:35.746105 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:21:35.746606 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:35.764303 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:35.767881 auditctl[1830]: No rules Jan 17 12:21:35.768768 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:21:35.769196 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:35.781033 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:35.812901 augenrules[1849]: No rules Jan 17 12:21:35.814495 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:35.817499 sudo[1826]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:35.863718 sshd[1822]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:35.869276 systemd[1]: sshd@5-10.128.0.73:22-139.178.89.65:49148.service: Deactivated successfully. Jan 17 12:21:35.874269 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:21:35.875118 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:21:35.876576 systemd-logind[1570]: Removed session 6. Jan 17 12:21:35.911484 systemd[1]: Started sshd@6-10.128.0.73:22-139.178.89.65:49160.service - OpenSSH per-connection server daemon (139.178.89.65:49160). Jan 17 12:21:36.206675 sshd[1858]: Accepted publickey for core from 139.178.89.65 port 49160 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:36.208891 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:36.215111 systemd-logind[1570]: New session 7 of user core. Jan 17 12:21:36.224090 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:21:36.386560 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:21:36.387086 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:36.830393 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:21:36.842481 (dockerd)[1878]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:21:37.292273 dockerd[1878]: time="2025-01-17T12:21:37.292053671Z" level=info msg="Starting up" Jan 17 12:21:37.417718 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2522147966-merged.mount: Deactivated successfully. Jan 17 12:21:37.913103 dockerd[1878]: time="2025-01-17T12:21:37.913035473Z" level=info msg="Loading containers: start." Jan 17 12:21:38.056818 kernel: Initializing XFRM netlink socket Jan 17 12:21:38.164425 systemd-networkd[1217]: docker0: Link UP Jan 17 12:21:38.187027 dockerd[1878]: time="2025-01-17T12:21:38.186964842Z" level=info msg="Loading containers: done." Jan 17 12:21:38.209590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3405282903-merged.mount: Deactivated successfully. 
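The sudo session above empties the audit rule set: the two rules.d files are removed, audit-rules is restarted, and both auditctl and augenrules report "No rules". A short sketch of the commands that inspect and rebuild that state (run as root):

    # List the kernel's active audit rules; prints "No rules" when the set is empty.
    auditctl -l
    # Recompile /etc/audit/rules.d/*.rules into a single set and load it.
    augenrules --load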
Jan 17 12:21:38.211676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:21:38.213089 dockerd[1878]: time="2025-01-17T12:21:38.211670149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:21:38.213089 dockerd[1878]: time="2025-01-17T12:21:38.211840719Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:21:38.213089 dockerd[1878]: time="2025-01-17T12:21:38.211993450Z" level=info msg="Daemon has completed initialization"
Jan 17 12:21:38.218382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:21:38.284913 dockerd[1878]: time="2025-01-17T12:21:38.284097857Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:21:38.284553 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 12:21:38.495096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:21:38.499566 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:21:38.593951 kubelet[2026]: E0117 12:21:38.593876 2026 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:21:38.599470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:21:38.600037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:21:39.373384 containerd[1594]: time="2025-01-17T12:21:39.373298983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\""
Jan 17 12:21:39.860286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284488041.mount: Deactivated successfully.
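dockerd comes up and announces its API on /run/docker.sock while the first kubelet start fails for lack of a config file. The Docker side can be checked with a raw HTTP request over the unix socket; GET /version is part of the stable Engine API, though opening the socket requires root or docker group membership. A sketch with no external dependencies:

#!/usr/bin/env python3
"""Sketch: confirm "API listen on /run/docker.sock" by querying the Engine API."""
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    # HTTP/1.0 so the daemon closes the connection after one response.
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

# The body is JSON; on the daemon logged above it would report "Version":"26.1.0".
print(reply.partition(b"\r\n\r\n")[2].decode())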
Jan 17 12:21:41.600262 containerd[1594]: time="2025-01-17T12:21:41.600180881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:41.601970 containerd[1594]: time="2025-01-17T12:21:41.601904797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35147358"
Jan 17 12:21:41.603299 containerd[1594]: time="2025-01-17T12:21:41.603212433Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:41.610143 containerd[1594]: time="2025-01-17T12:21:41.610085173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:41.614394 containerd[1594]: time="2025-01-17T12:21:41.613171209Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.239774433s"
Jan 17 12:21:41.614394 containerd[1594]: time="2025-01-17T12:21:41.614135641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\""
Jan 17 12:21:41.645855 containerd[1594]: time="2025-01-17T12:21:41.645811077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\""
Jan 17 12:21:43.309033 containerd[1594]: time="2025-01-17T12:21:43.308954728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:43.310688 containerd[1594]: time="2025-01-17T12:21:43.310621741Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32218575"
Jan 17 12:21:43.311879 containerd[1594]: time="2025-01-17T12:21:43.311797090Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:43.315512 containerd[1594]: time="2025-01-17T12:21:43.315435851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:43.317424 containerd[1594]: time="2025-01-17T12:21:43.317188479Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.671277216s"
Jan 17 12:21:43.317424 containerd[1594]: time="2025-01-17T12:21:43.317242726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\""
Jan 17 12:21:43.348908 containerd[1594]: time="2025-01-17T12:21:43.348856185Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\""
Jan 17 12:21:44.431133 containerd[1594]: time="2025-01-17T12:21:44.431063116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:44.432695 containerd[1594]: time="2025-01-17T12:21:44.432622257Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17334757"
Jan 17 12:21:44.434403 containerd[1594]: time="2025-01-17T12:21:44.434326768Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:44.442803 containerd[1594]: time="2025-01-17T12:21:44.440517887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:44.444742 containerd[1594]: time="2025-01-17T12:21:44.444681254Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.095545661s"
Jan 17 12:21:44.445010 containerd[1594]: time="2025-01-17T12:21:44.444976124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\""
Jan 17 12:21:44.477972 containerd[1594]: time="2025-01-17T12:21:44.477925587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\""
Jan 17 12:21:45.602020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252827588.mount: Deactivated successfully.
Jan 17 12:21:46.162448 containerd[1594]: time="2025-01-17T12:21:46.162372903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:46.164132 containerd[1594]: time="2025-01-17T12:21:46.164047319Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28622836"
Jan 17 12:21:46.165892 containerd[1594]: time="2025-01-17T12:21:46.165830738Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:46.169284 containerd[1594]: time="2025-01-17T12:21:46.169198050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:46.170474 containerd[1594]: time="2025-01-17T12:21:46.170227037Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.69221604s"
Jan 17 12:21:46.170474 containerd[1594]: time="2025-01-17T12:21:46.170281654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\""
Jan 17 12:21:46.201766 containerd[1594]: time="2025-01-17T12:21:46.201699073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:21:46.676118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642872269.mount: Deactivated successfully.
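The containerd pull messages carry both the blob size and a Go-style duration, so per-image pull throughput can be derived directly from this journal. A Python sketch matching the `Pulled image ... size "N" in D` format above; only the ms and s duration suffixes that actually occur in this log are handled:

#!/usr/bin/env python3
"""Sketch: tabulate image pull sizes and speeds from containerd journal lines."""
import re
import sys

# Quotes inside msg="..." appear backslash-escaped in the journal text,
# hence the optional backslash before each quote.
PULL = re.compile(
    r'Pulled image \\?"(?P<img>[^"\\]+)\\?".*'
    r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

for line in sys.stdin:
    m = PULL.search(line)
    if not m:
        continue
    secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
    mib = int(m["size"]) / (1024 * 1024)
    print(f"{m['img']}: {mib:.1f} MiB in {secs:.3f}s ({mib / secs:.1f} MiB/s)")

On the kube-proxy line above this yields roughly 27.3 MiB in 1.692s, about 16 MiB/s.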
Jan 17 12:21:47.750282 containerd[1594]: time="2025-01-17T12:21:47.750206854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:47.751968 containerd[1594]: time="2025-01-17T12:21:47.751895738Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Jan 17 12:21:47.753706 containerd[1594]: time="2025-01-17T12:21:47.753628160Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:47.757696 containerd[1594]: time="2025-01-17T12:21:47.757587329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:47.759313 containerd[1594]: time="2025-01-17T12:21:47.759134211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.557386229s"
Jan 17 12:21:47.759313 containerd[1594]: time="2025-01-17T12:21:47.759186296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:21:47.790059 containerd[1594]: time="2025-01-17T12:21:47.790016970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:21:48.152083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802254434.mount: Deactivated successfully.
Jan 17 12:21:48.159173 containerd[1594]: time="2025-01-17T12:21:48.159098852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:48.160475 containerd[1594]: time="2025-01-17T12:21:48.160397352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Jan 17 12:21:48.161549 containerd[1594]: time="2025-01-17T12:21:48.161505715Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:48.165170 containerd[1594]: time="2025-01-17T12:21:48.165128038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:48.166760 containerd[1594]: time="2025-01-17T12:21:48.166716549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 375.99864ms"
Jan 17 12:21:48.166906 containerd[1594]: time="2025-01-17T12:21:48.166766041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 17 12:21:48.196436 containerd[1594]: time="2025-01-17T12:21:48.196370013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 17 12:21:48.607311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186818830.mount: Deactivated successfully.
Jan 17 12:21:48.609389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 12:21:48.618505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:21:48.928714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:21:48.942292 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:21:49.047258 kubelet[2212]: E0117 12:21:49.047166 2212 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:21:49.053592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:21:49.054023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
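The second kubelet start fails the same way: /var/lib/kubelet/config.yaml is still missing, and systemd's restart counter climbs. A sketch reproducing the pre-flight check; the config file is typically written later by kubeadm, and NRestarts is a standard systemd unit property that should mirror the "restart counter is at N" messages above:

#!/usr/bin/env python3
"""Sketch: check the condition the crash-looping kubelet keeps failing on."""
import pathlib
import subprocess

cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
print(f"{cfg}: {'present' if cfg.exists() else 'missing -- kubelet will exit 1'}")

# Ask systemd how many times the unit has been restarted so far.
restarts = subprocess.run(
    ["systemctl", "show", "kubelet.service", "-p", "NRestarts"],
    capture_output=True, text=True,
).stdout.strip()
print(restarts)  # e.g. NRestarts=2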
Jan 17 12:21:50.926196 containerd[1594]: time="2025-01-17T12:21:50.926117697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:50.927973 containerd[1594]: time="2025-01-17T12:21:50.927899797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115"
Jan 17 12:21:50.929404 containerd[1594]: time="2025-01-17T12:21:50.929336550Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:50.933279 containerd[1594]: time="2025-01-17T12:21:50.933190012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:21:50.937635 containerd[1594]: time="2025-01-17T12:21:50.937581160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.741158324s"
Jan 17 12:21:50.937757 containerd[1594]: time="2025-01-17T12:21:50.937640169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 17 12:21:55.158835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:21:55.167179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:21:55.206201 systemd[1]: Reloading requested from client PID 2320 ('systemctl') (unit session-7.scope)...
Jan 17 12:21:55.206235 systemd[1]: Reloading...
Jan 17 12:21:55.338055 zram_generator::config[2361]: No configuration found.
Jan 17 12:21:55.516045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:21:55.615627 systemd[1]: Reloading finished in 408 ms.
Jan 17 12:21:55.642191 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 17 12:21:55.678622 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 12:21:55.679031 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 12:21:55.679671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:21:55.691224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:21:55.993139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:21:55.995145 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:21:56.064579 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:21:56.064579 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:21:56.065201 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:21:56.065201 kubelet[2428]: I0117 12:21:56.064683 2428 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:21:57.553212 kubelet[2428]: I0117 12:21:57.553162 2428 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:21:57.553212 kubelet[2428]: I0117 12:21:57.553202 2428 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:21:57.553875 kubelet[2428]: I0117 12:21:57.553536 2428 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:21:57.582809 kubelet[2428]: E0117 12:21:57.580593 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.583500 kubelet[2428]: I0117 12:21:57.583464 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:21:57.599704 kubelet[2428]: I0117 12:21:57.599623 2428 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:21:57.605223 kubelet[2428]: I0117 12:21:57.605160 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:21:57.605537 kubelet[2428]: I0117 12:21:57.605492 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:21:57.606464 kubelet[2428]: I0117 12:21:57.606416 2428 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:21:57.606464 kubelet[2428]: I0117 12:21:57.606456 2428 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:21:57.606663 kubelet[2428]: I0117 12:21:57.606623 2428 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:21:57.606904 kubelet[2428]: I0117 12:21:57.606878 2428 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:21:57.606998 kubelet[2428]: I0117 12:21:57.606914 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:21:57.606998 kubelet[2428]: I0117 12:21:57.606962 2428 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:21:57.606998 kubelet[2428]: I0117 12:21:57.606992 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:21:57.609804 kubelet[2428]: W0117 12:21:57.609713 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.610483 kubelet[2428]: E0117 12:21:57.609962 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.610483 kubelet[2428]: W0117 12:21:57.610400 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.610483 kubelet[2428]: E0117 12:21:57.610455 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.611594 kubelet[2428]: I0117 12:21:57.611215 2428 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:21:57.615925 kubelet[2428]: I0117 12:21:57.615871 2428 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:21:57.618178 kubelet[2428]: W0117 12:21:57.618119 2428 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
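Every one of these dial errors points at 10.128.0.73:6443, which stays refused until the kubelet itself brings up the static kube-apiserver pod from /etc/kubernetes/manifests. A probe sketch for that endpoint; /healthz is a standard kube-apiserver path, and certificate verification is deliberately skipped because this probe only cares whether the socket answers at all:

#!/usr/bin/env python3
"""Sketch: poll the apiserver endpoint the kubelet is failing to reach above."""
import ssl
import time
import urllib.error
import urllib.request

# Self-signed bootstrap certs: skip verification for a reachability check only.
ctx = ssl._create_unverified_context()
while True:
    try:
        with urllib.request.urlopen(
            "https://10.128.0.73:6443/healthz", context=ctx, timeout=2
        ) as r:
            print("apiserver answered:", r.status, r.read().decode())
            break
    except (urllib.error.URLError, OSError) as e:
        print("still unreachable:", e)
        time.sleep(2)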
Jan 17 12:21:57.619008 kubelet[2428]: I0117 12:21:57.618978 2428 server.go:1256] "Started kubelet"
Jan 17 12:21:57.619397 kubelet[2428]: I0117 12:21:57.619349 2428 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:21:57.621069 kubelet[2428]: I0117 12:21:57.620455 2428 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:21:57.623635 kubelet[2428]: I0117 12:21:57.623578 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:21:57.627601 kubelet[2428]: I0117 12:21:57.626943 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:21:57.627601 kubelet[2428]: I0117 12:21:57.627229 2428 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:21:57.630287 kubelet[2428]: E0117 12:21:57.630051 2428 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal.181b7a44e623c6bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,UID:ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,},FirstTimestamp:2025-01-17 12:21:57.618927293 +0000 UTC m=+1.617058781,LastTimestamp:2025-01-17 12:21:57.618927293 +0000 UTC m=+1.617058781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,}"
Jan 17 12:21:57.635853 kubelet[2428]: I0117 12:21:57.635826 2428 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:21:57.636987 kubelet[2428]: I0117 12:21:57.636137 2428 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:21:57.636987 kubelet[2428]: I0117 12:21:57.636233 2428 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:21:57.637303 kubelet[2428]: W0117 12:21:57.637250 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.637427 kubelet[2428]: E0117 12:21:57.637411 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.637660 kubelet[2428]: E0117 12:21:57.637640 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.73:6443: connect: connection refused" interval="200ms"
Jan 17 12:21:57.638159 kubelet[2428]: I0117 12:21:57.638136 2428 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:21:57.638404 kubelet[2428]: I0117 12:21:57.638381 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:21:57.640842 kubelet[2428]: I0117 12:21:57.640821 2428 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:21:57.658426 kubelet[2428]: I0117 12:21:57.658372 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:21:57.659956 kubelet[2428]: I0117 12:21:57.659905 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:21:57.659956 kubelet[2428]: I0117 12:21:57.659944 2428 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:21:57.660120 kubelet[2428]: I0117 12:21:57.659971 2428 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:21:57.660120 kubelet[2428]: E0117 12:21:57.660042 2428 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:21:57.669085 kubelet[2428]: E0117 12:21:57.669049 2428 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:21:57.683129 kubelet[2428]: W0117 12:21:57.682971 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.683129 kubelet[2428]: E0117 12:21:57.683061 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:57.697257 kubelet[2428]: I0117 12:21:57.697224 2428 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:21:57.697257 kubelet[2428]: I0117 12:21:57.697255 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:21:57.697483 kubelet[2428]: I0117 12:21:57.697280 2428 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:21:57.699741 kubelet[2428]: I0117 12:21:57.699685 2428 policy_none.go:49] "None policy: Start"
Jan 17 12:21:57.700609 kubelet[2428]: I0117 12:21:57.700586 2428 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:21:57.700709 kubelet[2428]: I0117 12:21:57.700645 2428 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:21:57.706518 kubelet[2428]: I0117 12:21:57.706472 2428 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:21:57.706906 kubelet[2428]: I0117 12:21:57.706873 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:21:57.712514 kubelet[2428]: E0117 12:21:57.712475 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" not found"
Jan 17 12:21:57.741753 kubelet[2428]: I0117 12:21:57.741723 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.742326 kubelet[2428]: E0117 12:21:57.742278 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.73:6443/api/v1/nodes\": dial tcp 10.128.0.73:6443: connect: connection refused" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.760576 kubelet[2428]: I0117 12:21:57.760530 2428 topology_manager.go:215] "Topology Admit Handler" podUID="c953ec82ab7698a51a44d5d1cd1a448d" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.766325 kubelet[2428]: I0117 12:21:57.766274 2428 topology_manager.go:215] "Topology Admit Handler" podUID="ba8d395adb481c3b1d33a93623eb9971" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.772627 kubelet[2428]: I0117 12:21:57.772281 2428 topology_manager.go:215] "Topology Admit Handler" podUID="03f209ae23a5cfbc352bdcaf86f8aac6" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.838361 kubelet[2428]: I0117 12:21:57.837681 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.838361 kubelet[2428]: I0117 12:21:57.837861 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c953ec82ab7698a51a44d5d1cd1a448d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"c953ec82ab7698a51a44d5d1cd1a448d\") " pod="kube-system/kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.838361 kubelet[2428]: I0117 12:21:57.837917 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.838361 kubelet[2428]: I0117 12:21:57.837957 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839296 kubelet[2428]: I0117 12:21:57.837995 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839296 kubelet[2428]: I0117 12:21:57.838030 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839296 kubelet[2428]: I0117 12:21:57.838079 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839296 kubelet[2428]: I0117 12:21:57.838125 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839525 kubelet[2428]: I0117 12:21:57.838163 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.839525 kubelet[2428]: E0117 12:21:57.839126 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.73:6443: connect: connection refused" interval="400ms"
Jan 17 12:21:57.947492 kubelet[2428]: I0117 12:21:57.947410 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:57.947957 kubelet[2428]: E0117 12:21:57.947913 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.73:6443/api/v1/nodes\": dial tcp 10.128.0.73:6443: connect: connection refused" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:58.076736 containerd[1594]: time="2025-01-17T12:21:58.076680728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:c953ec82ab7698a51a44d5d1cd1a448d,Namespace:kube-system,Attempt:0,}"
Jan 17 12:21:58.082854 containerd[1594]: time="2025-01-17T12:21:58.082670378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:ba8d395adb481c3b1d33a93623eb9971,Namespace:kube-system,Attempt:0,}"
Jan 17 12:21:58.087115 containerd[1594]: time="2025-01-17T12:21:58.086589168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:03f209ae23a5cfbc352bdcaf86f8aac6,Namespace:kube-system,Attempt:0,}"
Jan 17 12:21:58.240142 kubelet[2428]: E0117 12:21:58.239995 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.73:6443: connect: connection refused" interval="800ms"
Jan 17 12:21:58.354457 kubelet[2428]: I0117 12:21:58.354411 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:58.355061 kubelet[2428]: E0117 12:21:58.355017 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.73:6443/api/v1/nodes\": dial tcp 10.128.0.73:6443: connect: connection refused" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:58.443864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217351779.mount: Deactivated successfully.
Jan 17 12:21:58.452155 containerd[1594]: time="2025-01-17T12:21:58.452096394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:21:58.453345 containerd[1594]: time="2025-01-17T12:21:58.453300209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:21:58.454463 containerd[1594]: time="2025-01-17T12:21:58.454408076Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:21:58.455638 containerd[1594]: time="2025-01-17T12:21:58.455580575Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:21:58.456373 containerd[1594]: time="2025-01-17T12:21:58.456316729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:21:58.457380 containerd[1594]: time="2025-01-17T12:21:58.457325777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Jan 17 12:21:58.458419 containerd[1594]: time="2025-01-17T12:21:58.458356785Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:21:58.462723 containerd[1594]: time="2025-01-17T12:21:58.462684609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:21:58.464160 containerd[1594]: time="2025-01-17T12:21:58.463831296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 377.150033ms"
Jan 17 12:21:58.470927 containerd[1594]: time="2025-01-17T12:21:58.470533827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 387.769351ms"
Jan 17 12:21:58.480716 containerd[1594]: time="2025-01-17T12:21:58.480621515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 403.805269ms"
Jan 17 12:21:58.654597 kubelet[2428]: W0117 12:21:58.654370 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.654597 kubelet[2428]: E0117 12:21:58.654462 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.683598 containerd[1594]: time="2025-01-17T12:21:58.682174945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:21:58.683598 containerd[1594]: time="2025-01-17T12:21:58.682267964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:21:58.683598 containerd[1594]: time="2025-01-17T12:21:58.682305381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.683598 containerd[1594]: time="2025-01-17T12:21:58.682475799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.687169 containerd[1594]: time="2025-01-17T12:21:58.685903818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:21:58.687169 containerd[1594]: time="2025-01-17T12:21:58.686010688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:21:58.687169 containerd[1594]: time="2025-01-17T12:21:58.686038892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.687169 containerd[1594]: time="2025-01-17T12:21:58.686243421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.689136 containerd[1594]: time="2025-01-17T12:21:58.688663556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:21:58.689136 containerd[1594]: time="2025-01-17T12:21:58.688746213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:21:58.689136 containerd[1594]: time="2025-01-17T12:21:58.688796410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.689136 containerd[1594]: time="2025-01-17T12:21:58.688943019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:21:58.835313 containerd[1594]: time="2025-01-17T12:21:58.835216745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:c953ec82ab7698a51a44d5d1cd1a448d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0867ee6bbc866af086c1d8130257a47bcffaa15c6293f317b9e0c1482c290376\""
Jan 17 12:21:58.839185 containerd[1594]: time="2025-01-17T12:21:58.839069922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:ba8d395adb481c3b1d33a93623eb9971,Namespace:kube-system,Attempt:0,} returns sandbox id \"badab97ed13040e401279c4b7ab49e252e32c3b55a7c72f3c4b09f33e5dc9ea6\""
Jan 17 12:21:58.842708 kubelet[2428]: E0117 12:21:58.842465 2428 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-21291"
Jan 17 12:21:58.843400 kubelet[2428]: E0117 12:21:58.843295 2428 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-21291"
Jan 17 12:21:58.846803 containerd[1594]: time="2025-01-17T12:21:58.846609532Z" level=info msg="CreateContainer within sandbox \"badab97ed13040e401279c4b7ab49e252e32c3b55a7c72f3c4b09f33e5dc9ea6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:21:58.847497 containerd[1594]: time="2025-01-17T12:21:58.847460802Z" level=info msg="CreateContainer within sandbox \"0867ee6bbc866af086c1d8130257a47bcffaa15c6293f317b9e0c1482c290376\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:21:58.850820 containerd[1594]: time="2025-01-17T12:21:58.850746256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal,Uid:03f209ae23a5cfbc352bdcaf86f8aac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb613eb92700444f4a6c15f962c7671fdad1abf0dea7b504837bd2c1370db452\""
Jan 17 12:21:58.852456 kubelet[2428]: E0117 12:21:58.852343 2428 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flat"
Jan 17 12:21:58.855756 containerd[1594]: time="2025-01-17T12:21:58.855714340Z" level=info msg="CreateContainer within sandbox \"cb613eb92700444f4a6c15f962c7671fdad1abf0dea7b504837bd2c1370db452\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:21:58.876634 containerd[1594]: time="2025-01-17T12:21:58.876511104Z" level=info msg="CreateContainer within sandbox \"badab97ed13040e401279c4b7ab49e252e32c3b55a7c72f3c4b09f33e5dc9ea6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b536c155365648bbaf9be78be212ec78840abf7c4bc8638255b84cc0a83f1a4\""
Jan 17 12:21:58.877849 containerd[1594]: time="2025-01-17T12:21:58.877748260Z" level=info msg="StartContainer for \"8b536c155365648bbaf9be78be212ec78840abf7c4bc8638255b84cc0a83f1a4\""
Jan 17 12:21:58.880752 containerd[1594]: time="2025-01-17T12:21:58.880628177Z" level=info msg="CreateContainer within sandbox \"0867ee6bbc866af086c1d8130257a47bcffaa15c6293f317b9e0c1482c290376\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4493deb6417a2f48dbbd58618d9058155965aa31b6825ad3bed0ae9d25e4b24\""
Jan 17 12:21:58.882803 containerd[1594]: time="2025-01-17T12:21:58.881876749Z" level=info msg="StartContainer for \"b4493deb6417a2f48dbbd58618d9058155965aa31b6825ad3bed0ae9d25e4b24\""
Jan 17 12:21:58.884727 containerd[1594]: time="2025-01-17T12:21:58.884587295Z" level=info msg="CreateContainer within sandbox \"cb613eb92700444f4a6c15f962c7671fdad1abf0dea7b504837bd2c1370db452\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b37fed75dbc69763ef35c6c05702641b020d500a03a0e1ce802c67ca411d6d9\""
Jan 17 12:21:58.885700 containerd[1594]: time="2025-01-17T12:21:58.885670376Z" level=info msg="StartContainer for \"6b37fed75dbc69763ef35c6c05702641b020d500a03a0e1ce802c67ca411d6d9\""
Jan 17 12:21:58.950657 kubelet[2428]: W0117 12:21:58.947651 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.954279 kubelet[2428]: E0117 12:21:58.954038 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.989057 kubelet[2428]: W0117 12:21:58.988998 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.989254 kubelet[2428]: E0117 12:21:58.989078 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.992843 kubelet[2428]: W0117 12:21:58.991505 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:58.992843 kubelet[2428]: E0117 12:21:58.991587 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.73:6443: connect: connection refused
Jan 17 12:21:59.041664 kubelet[2428]: E0117 12:21:59.041625 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.73:6443: connect: connection refused" interval="1.6s"
Jan 17 12:21:59.046544 containerd[1594]: time="2025-01-17T12:21:59.046380912Z" level=info msg="StartContainer for \"6b37fed75dbc69763ef35c6c05702641b020d500a03a0e1ce802c67ca411d6d9\" returns successfully"
Jan 17 12:21:59.064172 containerd[1594]: time="2025-01-17T12:21:59.063933663Z" level=info msg="StartContainer for \"8b536c155365648bbaf9be78be212ec78840abf7c4bc8638255b84cc0a83f1a4\" returns successfully"
Jan 17 12:21:59.150073 containerd[1594]: time="2025-01-17T12:21:59.149919126Z" level=info msg="StartContainer for \"b4493deb6417a2f48dbbd58618d9058155965aa31b6825ad3bed0ae9d25e4b24\" returns successfully"
Jan 17 12:21:59.163834 kubelet[2428]: I0117 12:21:59.161423 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:21:59.164312 kubelet[2428]: E0117 12:21:59.164245 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.73:6443/api/v1/nodes\": dial tcp 10.128.0.73:6443: connect: connection refused" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:22:00.785293 kubelet[2428]: I0117 12:22:00.785254 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:22:02.556804 kubelet[2428]: I0117 12:22:02.555363 2428 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:22:02.613816 kubelet[2428]: I0117 12:22:02.612532 2428 apiserver.go:52] "Watching apiserver"
Jan 17 12:22:02.636684 kubelet[2428]: I0117 12:22:02.636634 2428 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 17 12:22:02.638512 kubelet[2428]: E0117 12:22:02.638415 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jan 17 12:22:05.163401 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-7.scope)...
Jan 17 12:22:05.163425 systemd[1]: Reloading...
Jan 17 12:22:05.257849 zram_generator::config[2739]: No configuration found.
Jan 17 12:22:05.433872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:22:05.541069 systemd[1]: Reloading finished in 376 ms.
Jan 17 12:22:05.585204 kubelet[2428]: I0117 12:22:05.585109 2428 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:22:05.585188 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:22:05.604624 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:22:05.605228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:22:05.615922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:22:05.874168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
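The restart at 12:22:05 brings up a third kubelet instance, which re-registers below. The same kubelet_node_status transitions can be followed live from the journal; a sketch using standard journalctl flags, matching the exact message strings seen in this log:

#!/usr/bin/env python3
"""Sketch: follow the node-registration retry loop in the kubelet journal."""
import subprocess

PHRASES = (
    "Attempting to register node",
    "Unable to register node with API server",
    "Successfully registered node",
)

# -f follows new entries; -o cat strips the journal metadata prefix.
proc = subprocess.Popen(
    ["journalctl", "-u", "kubelet", "-f", "-o", "cat"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    if any(p in line for p in PHRASES):
        print(line.rstrip())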
Jan 17 12:22:05.893628 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:22:05.988823 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:22:05.988823 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:22:05.988823 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:22:05.988823 kubelet[2801]: I0117 12:22:05.987943 2801 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:22:05.999512 kubelet[2801]: I0117 12:22:05.998651 2801 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:22:05.999512 kubelet[2801]: I0117 12:22:05.998686 2801 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:22:05.999512 kubelet[2801]: I0117 12:22:05.999053 2801 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:22:06.001390 kubelet[2801]: I0117 12:22:06.001364 2801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 12:22:06.005632 kubelet[2801]: I0117 12:22:06.005234 2801 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:22:06.016235 kubelet[2801]: I0117 12:22:06.016180 2801 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.016981 2801 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.017272 2801 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.017320 2801 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.017339 2801 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.017396 2801 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:22:06.017969 kubelet[2801]: I0117 12:22:06.017557 2801 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:22:06.018440 kubelet[2801]: I0117 12:22:06.018096 2801 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:22:06.018440 kubelet[2801]: I0117 12:22:06.018149 2801 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:22:06.018440 kubelet[2801]: I0117 12:22:06.018224 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:22:06.029864 kubelet[2801]: I0117 12:22:06.025923 2801 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:22:06.029864 kubelet[2801]: I0117 12:22:06.026211 2801 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:22:06.029864 kubelet[2801]: I0117 12:22:06.026788 2801 server.go:1256] "Started kubelet"
Jan 17 12:22:06.040462 kubelet[2801]: I0117 12:22:06.040270 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:22:06.053239 kubelet[2801]: I0117 12:22:06.052201 2801 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:22:06.053668 kubelet[2801]: I0117 12:22:06.053487 2801 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:22:06.060882 kubelet[2801]: I0117 12:22:06.060818 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:22:06.061487 kubelet[2801]: I0117 12:22:06.061103 2801 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:22:06.072271 kubelet[2801]: I0117 12:22:06.067829 2801 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:22:06.072271 kubelet[2801]: I0117 12:22:06.068033 2801 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:22:06.072271 kubelet[2801]: I0117 12:22:06.068251 2801 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:22:06.079488 kubelet[2801]: I0117 12:22:06.079457 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:22:06.084953 kubelet[2801]: I0117 12:22:06.084927 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:22:06.085180 kubelet[2801]: I0117 12:22:06.085162 2801 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:22:06.085514 kubelet[2801]: I0117 12:22:06.085284 2801 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:22:06.085514 kubelet[2801]: E0117 12:22:06.085383 2801 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:22:06.087549 kubelet[2801]: E0117 12:22:06.085948 2801 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:22:06.096379 kubelet[2801]: I0117 12:22:06.096340 2801 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:22:06.096379 kubelet[2801]: I0117 12:22:06.096371 2801 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:22:06.096598 kubelet[2801]: I0117 12:22:06.096473 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:22:06.177887 kubelet[2801]: I0117 12:22:06.177853 2801 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:22:06.191186 kubelet[2801]: E0117 12:22:06.191149 2801 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 12:22:06.196267 kubelet[2801]: I0117 12:22:06.196229 2801 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:22:06.196509 kubelet[2801]: I0117 12:22:06.196489 2801 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:22:06.196654 kubelet[2801]: I0117 12:22:06.196639 2801 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:22:06.197516 kubelet[2801]: I0117 12:22:06.197138 2801 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 12:22:06.197516 kubelet[2801]: I0117 12:22:06.197204 2801 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 12:22:06.197516 kubelet[2801]: I0117 12:22:06.197218 2801 policy_none.go:49] "None policy: Start"
Jan 17 12:22:06.198367 kubelet[2801]: I0117 12:22:06.198170 2801 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal"
Jan 17 12:22:06.198367 kubelet[2801]: I0117 12:22:06.198268 2801 kubelet_node_status.go:76] "Successfully registered node"
node="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.202024 kubelet[2801]: I0117 12:22:06.199031 2801 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:06.202024 kubelet[2801]: I0117 12:22:06.199066 2801 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:06.202024 kubelet[2801]: I0117 12:22:06.199346 2801 state_mem.go:75] "Updated machine memory state" Jan 17 12:22:06.202024 kubelet[2801]: I0117 12:22:06.201858 2801 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:06.202296 kubelet[2801]: I0117 12:22:06.202183 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:06.393708 kubelet[2801]: I0117 12:22:06.392188 2801 topology_manager.go:215] "Topology Admit Handler" podUID="ba8d395adb481c3b1d33a93623eb9971" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.393708 kubelet[2801]: I0117 12:22:06.392339 2801 topology_manager.go:215] "Topology Admit Handler" podUID="03f209ae23a5cfbc352bdcaf86f8aac6" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.393708 kubelet[2801]: I0117 12:22:06.392396 2801 topology_manager.go:215] "Topology Admit Handler" podUID="c953ec82ab7698a51a44d5d1cd1a448d" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.403559 kubelet[2801]: W0117 12:22:06.402837 2801 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:22:06.404591 kubelet[2801]: W0117 12:22:06.404540 2801 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:22:06.404744 kubelet[2801]: W0117 12:22:06.404724 2801 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:22:06.471244 kubelet[2801]: I0117 12:22:06.470990 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.471244 kubelet[2801]: I0117 12:22:06.471067 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.472786 kubelet[2801]: I0117 12:22:06.471111 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.472786 kubelet[2801]: I0117 12:22:06.472367 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.472786 kubelet[2801]: I0117 12:22:06.472424 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.472786 kubelet[2801]: I0117 12:22:06.472492 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba8d395adb481c3b1d33a93623eb9971-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"ba8d395adb481c3b1d33a93623eb9971\") " pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.473068 kubelet[2801]: I0117 12:22:06.472531 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.473068 kubelet[2801]: I0117 12:22:06.472572 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03f209ae23a5cfbc352bdcaf86f8aac6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"03f209ae23a5cfbc352bdcaf86f8aac6\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:06.473068 kubelet[2801]: I0117 12:22:06.472610 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c953ec82ab7698a51a44d5d1cd1a448d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal\" (UID: \"c953ec82ab7698a51a44d5d1cd1a448d\") " pod="kube-system/kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:07.021369 kubelet[2801]: I0117 12:22:07.021020 2801 apiserver.go:52] "Watching apiserver" Jan 17 12:22:07.068385 kubelet[2801]: I0117 12:22:07.068255 2801 desired_state_of_world_populator.go:159] "Finished populating initial desired state of 
world" Jan 17 12:22:07.225886 kubelet[2801]: I0117 12:22:07.225828 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" podStartSLOduration=1.225357714 podStartE2EDuration="1.225357714s" podCreationTimestamp="2025-01-17 12:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:07.224525405 +0000 UTC m=+1.323659403" watchObservedRunningTime="2025-01-17 12:22:07.225357714 +0000 UTC m=+1.324491713" Jan 17 12:22:07.291015 kubelet[2801]: I0117 12:22:07.289545 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" podStartSLOduration=1.289490615 podStartE2EDuration="1.289490615s" podCreationTimestamp="2025-01-17 12:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:07.264974231 +0000 UTC m=+1.364108229" watchObservedRunningTime="2025-01-17 12:22:07.289490615 +0000 UTC m=+1.388624595" Jan 17 12:22:07.314361 kubelet[2801]: I0117 12:22:07.313116 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" podStartSLOduration=1.313057081 podStartE2EDuration="1.313057081s" podCreationTimestamp="2025-01-17 12:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:07.292366495 +0000 UTC m=+1.391500492" watchObservedRunningTime="2025-01-17 12:22:07.313057081 +0000 UTC m=+1.412191077" Jan 17 12:22:10.028707 update_engine[1575]: I20250117 12:22:10.028163 1575 update_attempter.cc:509] Updating boot flags... Jan 17 12:22:10.118876 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2866) Jan 17 12:22:10.225733 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2862) Jan 17 12:22:10.336919 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2862) Jan 17 12:22:11.906333 sudo[1862]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:11.950335 sshd[1858]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:11.955384 systemd[1]: sshd@6-10.128.0.73:22-139.178.89.65:49160.service: Deactivated successfully. Jan 17 12:22:11.961468 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:22:11.962261 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:22:11.965112 systemd-logind[1570]: Removed session 7. Jan 17 12:22:20.427895 kubelet[2801]: I0117 12:22:20.427740 2801 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:22:20.431456 containerd[1594]: time="2025-01-17T12:22:20.431056546Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:22:20.434098 kubelet[2801]: I0117 12:22:20.432680 2801 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:22:20.679628 kubelet[2801]: I0117 12:22:20.679465 2801 topology_manager.go:215] "Topology Admit Handler" podUID="5f37f164-078b-4ca2-b696-2e354b70721c" podNamespace="kube-system" podName="kube-proxy-k8rwg" Jan 17 12:22:20.770288 kubelet[2801]: I0117 12:22:20.770046 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f37f164-078b-4ca2-b696-2e354b70721c-kube-proxy\") pod \"kube-proxy-k8rwg\" (UID: \"5f37f164-078b-4ca2-b696-2e354b70721c\") " pod="kube-system/kube-proxy-k8rwg" Jan 17 12:22:20.770288 kubelet[2801]: I0117 12:22:20.770105 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f37f164-078b-4ca2-b696-2e354b70721c-lib-modules\") pod \"kube-proxy-k8rwg\" (UID: \"5f37f164-078b-4ca2-b696-2e354b70721c\") " pod="kube-system/kube-proxy-k8rwg" Jan 17 12:22:20.770288 kubelet[2801]: I0117 12:22:20.770146 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sgg\" (UniqueName: \"kubernetes.io/projected/5f37f164-078b-4ca2-b696-2e354b70721c-kube-api-access-j6sgg\") pod \"kube-proxy-k8rwg\" (UID: \"5f37f164-078b-4ca2-b696-2e354b70721c\") " pod="kube-system/kube-proxy-k8rwg" Jan 17 12:22:20.770288 kubelet[2801]: I0117 12:22:20.770187 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f37f164-078b-4ca2-b696-2e354b70721c-xtables-lock\") pod \"kube-proxy-k8rwg\" (UID: \"5f37f164-078b-4ca2-b696-2e354b70721c\") " pod="kube-system/kube-proxy-k8rwg" Jan 17 12:22:20.947756 kubelet[2801]: I0117 12:22:20.947088 2801 topology_manager.go:215] "Topology Admit Handler" podUID="74d72212-46f4-48bb-a0c6-b64f8533fd55" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-dnvzm" Jan 17 12:22:20.975250 kubelet[2801]: I0117 12:22:20.975012 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdzbh\" (UniqueName: \"kubernetes.io/projected/74d72212-46f4-48bb-a0c6-b64f8533fd55-kube-api-access-xdzbh\") pod \"tigera-operator-c7ccbd65-dnvzm\" (UID: \"74d72212-46f4-48bb-a0c6-b64f8533fd55\") " pod="tigera-operator/tigera-operator-c7ccbd65-dnvzm" Jan 17 12:22:20.975720 kubelet[2801]: I0117 12:22:20.975475 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74d72212-46f4-48bb-a0c6-b64f8533fd55-var-lib-calico\") pod \"tigera-operator-c7ccbd65-dnvzm\" (UID: \"74d72212-46f4-48bb-a0c6-b64f8533fd55\") " pod="tigera-operator/tigera-operator-c7ccbd65-dnvzm" Jan 17 12:22:20.994413 containerd[1594]: time="2025-01-17T12:22:20.994347843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8rwg,Uid:5f37f164-078b-4ca2-b696-2e354b70721c,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:21.268530 containerd[1594]: time="2025-01-17T12:22:21.268365256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dnvzm,Uid:74d72212-46f4-48bb-a0c6-b64f8533fd55,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:22:21.549598 containerd[1594]: time="2025-01-17T12:22:21.549028430Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:21.549598 containerd[1594]: time="2025-01-17T12:22:21.549118036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:21.549598 containerd[1594]: time="2025-01-17T12:22:21.549192659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:21.550510 containerd[1594]: time="2025-01-17T12:22:21.549461454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:21.612841 containerd[1594]: time="2025-01-17T12:22:21.612762150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8rwg,Uid:5f37f164-078b-4ca2-b696-2e354b70721c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdd895a062082ac57a51ab78ee4390557ea956b97bbdb27d954f4711bc3de239\"" Jan 17 12:22:21.617402 containerd[1594]: time="2025-01-17T12:22:21.617350969Z" level=info msg="CreateContainer within sandbox \"fdd895a062082ac57a51ab78ee4390557ea956b97bbdb27d954f4711bc3de239\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:22:21.895793 systemd[1]: run-containerd-runc-k8s.io-fdd895a062082ac57a51ab78ee4390557ea956b97bbdb27d954f4711bc3de239-runc.s9SNph.mount: Deactivated successfully. Jan 17 12:22:21.915697 containerd[1594]: time="2025-01-17T12:22:21.915503153Z" level=info msg="CreateContainer within sandbox \"fdd895a062082ac57a51ab78ee4390557ea956b97bbdb27d954f4711bc3de239\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"587228fcffe4ed7cd1b439ff4f2fd3ae69691ad41322e10524c5623170a10b12\"" Jan 17 12:22:21.919465 containerd[1594]: time="2025-01-17T12:22:21.919308444Z" level=info msg="StartContainer for \"587228fcffe4ed7cd1b439ff4f2fd3ae69691ad41322e10524c5623170a10b12\"" Jan 17 12:22:21.935612 containerd[1594]: time="2025-01-17T12:22:21.935316382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:21.936517 containerd[1594]: time="2025-01-17T12:22:21.936388019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:21.936517 containerd[1594]: time="2025-01-17T12:22:21.936461526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:21.936957 containerd[1594]: time="2025-01-17T12:22:21.936892871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:22.048410 containerd[1594]: time="2025-01-17T12:22:22.047873231Z" level=info msg="StartContainer for \"587228fcffe4ed7cd1b439ff4f2fd3ae69691ad41322e10524c5623170a10b12\" returns successfully" Jan 17 12:22:22.069344 containerd[1594]: time="2025-01-17T12:22:22.069293488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dnvzm,Uid:74d72212-46f4-48bb-a0c6-b64f8533fd55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3ce7043b54980861c98fb8364c192d362f325d1728b8da2bf27993067adef107\"" Jan 17 12:22:22.072561 containerd[1594]: time="2025-01-17T12:22:22.072523756Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:22:26.926115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489684719.mount: Deactivated successfully. Jan 17 12:22:29.536928 containerd[1594]: time="2025-01-17T12:22:29.536854020Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.538458 containerd[1594]: time="2025-01-17T12:22:29.538387328Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764293" Jan 17 12:22:29.540142 containerd[1594]: time="2025-01-17T12:22:29.540075826Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.543446 containerd[1594]: time="2025-01-17T12:22:29.543381798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.544696 containerd[1594]: time="2025-01-17T12:22:29.544479439Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 7.47011014s" Jan 17 12:22:29.544696 containerd[1594]: time="2025-01-17T12:22:29.544528419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:22:29.547720 containerd[1594]: time="2025-01-17T12:22:29.547671589Z" level=info msg="CreateContainer within sandbox \"3ce7043b54980861c98fb8364c192d362f325d1728b8da2bf27993067adef107\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:22:29.567723 containerd[1594]: time="2025-01-17T12:22:29.567663212Z" level=info msg="CreateContainer within sandbox \"3ce7043b54980861c98fb8364c192d362f325d1728b8da2bf27993067adef107\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"816b126af5e00d752a645ad16d2f36672a3abd744c6cf207d44e4b792aa1248f\"" Jan 17 12:22:29.569859 containerd[1594]: time="2025-01-17T12:22:29.568432943Z" level=info msg="StartContainer for \"816b126af5e00d752a645ad16d2f36672a3abd744c6cf207d44e4b792aa1248f\"" Jan 17 12:22:29.569247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049650978.mount: Deactivated successfully. 
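
The 7.47011014s pull time containerd reports above can be cross-checked against the PullImage and Pulled timestamps in this same log. A small illustrative Python check (timestamps truncated to microseconds, which is what datetime.fromisoformat accepts portably):

    from datetime import datetime

    started = datetime.fromisoformat("2025-01-17T12:22:22.072523")   # PullImage logged
    finished = datetime.fromisoformat("2025-01-17T12:22:29.544479")  # Pulled image logged
    print(finished - started)  # 0:00:07.471956, consistent with the reported 7.47s

The small gap between the wall-clock difference and the reported duration is expected, since containerd measures the pull internally rather than from these log timestamps.
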
Jan 17 12:22:29.655990 containerd[1594]: time="2025-01-17T12:22:29.653716540Z" level=info msg="StartContainer for \"816b126af5e00d752a645ad16d2f36672a3abd744c6cf207d44e4b792aa1248f\" returns successfully" Jan 17 12:22:30.199585 kubelet[2801]: I0117 12:22:30.199446 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k8rwg" podStartSLOduration=10.19938937 podStartE2EDuration="10.19938937s" podCreationTimestamp="2025-01-17 12:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:22.177965679 +0000 UTC m=+16.277099679" watchObservedRunningTime="2025-01-17 12:22:30.19938937 +0000 UTC m=+24.298523423" Jan 17 12:22:33.021234 kubelet[2801]: I0117 12:22:33.021175 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-dnvzm" podStartSLOduration=5.547161229 podStartE2EDuration="13.021101206s" podCreationTimestamp="2025-01-17 12:22:20 +0000 UTC" firstStartedPulling="2025-01-17 12:22:22.071295636 +0000 UTC m=+16.170429610" lastFinishedPulling="2025-01-17 12:22:29.545235596 +0000 UTC m=+23.644369587" observedRunningTime="2025-01-17 12:22:30.19999487 +0000 UTC m=+24.299128869" watchObservedRunningTime="2025-01-17 12:22:33.021101206 +0000 UTC m=+27.120235227" Jan 17 12:22:33.021996 kubelet[2801]: I0117 12:22:33.021372 2801 topology_manager.go:215] "Topology Admit Handler" podUID="3a8ddaf3-e70f-4624-82b1-89f7f8c2db13" podNamespace="calico-system" podName="calico-typha-678c9b84d6-5rnt6" Jan 17 12:22:33.061890 kubelet[2801]: I0117 12:22:33.061825 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtcrl\" (UniqueName: \"kubernetes.io/projected/3a8ddaf3-e70f-4624-82b1-89f7f8c2db13-kube-api-access-qtcrl\") pod \"calico-typha-678c9b84d6-5rnt6\" (UID: \"3a8ddaf3-e70f-4624-82b1-89f7f8c2db13\") " pod="calico-system/calico-typha-678c9b84d6-5rnt6" Jan 17 12:22:33.062176 kubelet[2801]: I0117 12:22:33.061933 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a8ddaf3-e70f-4624-82b1-89f7f8c2db13-typha-certs\") pod \"calico-typha-678c9b84d6-5rnt6\" (UID: \"3a8ddaf3-e70f-4624-82b1-89f7f8c2db13\") " pod="calico-system/calico-typha-678c9b84d6-5rnt6" Jan 17 12:22:33.062176 kubelet[2801]: I0117 12:22:33.061974 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a8ddaf3-e70f-4624-82b1-89f7f8c2db13-tigera-ca-bundle\") pod \"calico-typha-678c9b84d6-5rnt6\" (UID: \"3a8ddaf3-e70f-4624-82b1-89f7f8c2db13\") " pod="calico-system/calico-typha-678c9b84d6-5rnt6" Jan 17 12:22:33.155136 kubelet[2801]: I0117 12:22:33.155090 2801 topology_manager.go:215] "Topology Admit Handler" podUID="ea99939d-cc21-4811-ba27-1ccbb6b9d23e" podNamespace="calico-system" podName="calico-node-vrkcb" Jan 17 12:22:33.265017 kubelet[2801]: I0117 12:22:33.264967 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkkd\" (UniqueName: \"kubernetes.io/projected/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-kube-api-access-rnkkd\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265222 kubelet[2801]: I0117 12:22:33.265050 2801 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-xtables-lock\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265222 kubelet[2801]: I0117 12:22:33.265091 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-lib-modules\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265222 kubelet[2801]: I0117 12:22:33.265121 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-var-run-calico\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265222 kubelet[2801]: I0117 12:22:33.265153 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-var-lib-calico\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265222 kubelet[2801]: I0117 12:22:33.265185 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-cni-bin-dir\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265493 kubelet[2801]: I0117 12:22:33.265216 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-cni-net-dir\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265493 kubelet[2801]: I0117 12:22:33.265250 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-tigera-ca-bundle\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265493 kubelet[2801]: I0117 12:22:33.265288 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-policysync\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265493 kubelet[2801]: I0117 12:22:33.265327 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-node-certs\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265493 kubelet[2801]: I0117 12:22:33.265366 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-flexvol-driver-host\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.265749 kubelet[2801]: I0117 12:22:33.265402 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea99939d-cc21-4811-ba27-1ccbb6b9d23e-cni-log-dir\") pod \"calico-node-vrkcb\" (UID: \"ea99939d-cc21-4811-ba27-1ccbb6b9d23e\") " pod="calico-system/calico-node-vrkcb" Jan 17 12:22:33.272744 kubelet[2801]: I0117 12:22:33.270987 2801 topology_manager.go:215] "Topology Admit Handler" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3" podNamespace="calico-system" podName="csi-node-driver-btsnv" Jan 17 12:22:33.272744 kubelet[2801]: E0117 12:22:33.271391 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3" Jan 17 12:22:33.338577 containerd[1594]: time="2025-01-17T12:22:33.338081449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-678c9b84d6-5rnt6,Uid:3a8ddaf3-e70f-4624-82b1-89f7f8c2db13,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:33.370799 kubelet[2801]: I0117 12:22:33.367017 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d681af9d-6a3e-41bc-9243-7f519ac5c8d3-registration-dir\") pod \"csi-node-driver-btsnv\" (UID: \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\") " pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:33.370799 kubelet[2801]: I0117 12:22:33.367129 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d681af9d-6a3e-41bc-9243-7f519ac5c8d3-varrun\") pod \"csi-node-driver-btsnv\" (UID: \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\") " pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:33.370799 kubelet[2801]: I0117 12:22:33.367257 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwzg\" (UniqueName: \"kubernetes.io/projected/d681af9d-6a3e-41bc-9243-7f519ac5c8d3-kube-api-access-jtwzg\") pod \"csi-node-driver-btsnv\" (UID: \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\") " pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:33.370799 kubelet[2801]: I0117 12:22:33.367343 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d681af9d-6a3e-41bc-9243-7f519ac5c8d3-socket-dir\") pod \"csi-node-driver-btsnv\" (UID: \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\") " pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:33.370799 kubelet[2801]: I0117 12:22:33.367411 2801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d681af9d-6a3e-41bc-9243-7f519ac5c8d3-kubelet-dir\") pod \"csi-node-driver-btsnv\" (UID: \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\") " pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:33.395844 kubelet[2801]: E0117 12:22:33.393857 2801 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.396119 kubelet[2801]: W0117 12:22:33.396085 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.398805 kubelet[2801]: E0117 12:22:33.396579 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.421459 kubelet[2801]: E0117 12:22:33.421426 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.421730 kubelet[2801]: W0117 12:22:33.421667 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.424191 kubelet[2801]: E0117 12:22:33.423864 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.445314 kubelet[2801]: E0117 12:22:33.445211 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.445314 kubelet[2801]: W0117 12:22:33.445258 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.445972 kubelet[2801]: E0117 12:22:33.445292 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.470375 kubelet[2801]: E0117 12:22:33.469886 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.470375 kubelet[2801]: W0117 12:22:33.469918 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.470375 kubelet[2801]: E0117 12:22:33.469968 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.470985 kubelet[2801]: E0117 12:22:33.470733 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.470985 kubelet[2801]: W0117 12:22:33.470752 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.470985 kubelet[2801]: E0117 12:22:33.470797 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:33.472001 kubelet[2801]: E0117 12:22:33.471704 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.472001 kubelet[2801]: W0117 12:22:33.471732 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.472001 kubelet[2801]: E0117 12:22:33.471758 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.472804 kubelet[2801]: E0117 12:22:33.472105 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.472804 kubelet[2801]: W0117 12:22:33.472120 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.472804 kubelet[2801]: E0117 12:22:33.472139 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.473013 containerd[1594]: time="2025-01-17T12:22:33.469289311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:33.473013 containerd[1594]: time="2025-01-17T12:22:33.469398255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:33.473013 containerd[1594]: time="2025-01-17T12:22:33.469423511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:33.473013 containerd[1594]: time="2025-01-17T12:22:33.469579997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:33.476196 kubelet[2801]: E0117 12:22:33.473894 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.476196 kubelet[2801]: W0117 12:22:33.473915 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.478202 kubelet[2801]: E0117 12:22:33.476963 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:33.478737 containerd[1594]: time="2025-01-17T12:22:33.478697120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrkcb,Uid:ea99939d-cc21-4811-ba27-1ccbb6b9d23e,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:33.479119 kubelet[2801]: E0117 12:22:33.479095 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.479676 kubelet[2801]: W0117 12:22:33.479447 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.479676 kubelet[2801]: E0117 12:22:33.479489 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.481086 kubelet[2801]: E0117 12:22:33.480450 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.481086 kubelet[2801]: W0117 12:22:33.480504 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.481086 kubelet[2801]: E0117 12:22:33.480529 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.481583 kubelet[2801]: E0117 12:22:33.481562 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.481830 kubelet[2801]: W0117 12:22:33.481803 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.482055 kubelet[2801]: E0117 12:22:33.482036 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.483068 kubelet[2801]: E0117 12:22:33.482835 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.483068 kubelet[2801]: W0117 12:22:33.482868 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.484806 kubelet[2801]: E0117 12:22:33.483427 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:33.485075 kubelet[2801]: E0117 12:22:33.485058 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.485192 kubelet[2801]: W0117 12:22:33.485176 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.485346 kubelet[2801]: E0117 12:22:33.485300 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.486276 kubelet[2801]: E0117 12:22:33.486022 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.486276 kubelet[2801]: W0117 12:22:33.486044 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.486276 kubelet[2801]: E0117 12:22:33.486104 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.486731 kubelet[2801]: E0117 12:22:33.486715 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.486941 kubelet[2801]: W0117 12:22:33.486920 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.487059 kubelet[2801]: E0117 12:22:33.487044 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.488527 kubelet[2801]: E0117 12:22:33.488479 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.489275 kubelet[2801]: W0117 12:22:33.488909 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.489830 kubelet[2801]: E0117 12:22:33.489576 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:33.491405 kubelet[2801]: E0117 12:22:33.490920 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:33.491405 kubelet[2801]: W0117 12:22:33.490941 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:33.491405 kubelet[2801]: E0117 12:22:33.491187 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 17 12:22:33.494607 kubelet[2801]: E0117 12:22:33.494355 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:22:33.494607 kubelet[2801]: W0117 12:22:33.494377 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:22:33.494607 kubelet[2801]: E0117 12:22:33.494407 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three-message driver-call.go:262 / driver-call.go:149 / plugins.go:730 sequence above repeats 11 more times, identical except for timestamps, between 12:22:33.494996 and 12:22:33.559216]
Jan 17 12:22:33.634300 containerd[1594]: time="2025-01-17T12:22:33.629013719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:22:33.634300 containerd[1594]: time="2025-01-17T12:22:33.630103526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:22:33.634300 containerd[1594]: time="2025-01-17T12:22:33.630128320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:22:33.634300 containerd[1594]: time="2025-01-17T12:22:33.630267980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:22:33.753140 containerd[1594]: time="2025-01-17T12:22:33.751360448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-678c9b84d6-5rnt6,Uid:3a8ddaf3-e70f-4624-82b1-89f7f8c2db13,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcc4b87be2f1e5c1cef37bd599489d6f78c2b45213383493ccc0e8ee8c5451b5\""
Jan 17 12:22:33.756800 containerd[1594]: time="2025-01-17T12:22:33.755388724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 17 12:22:33.768375 containerd[1594]: time="2025-01-17T12:22:33.768280479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrkcb,Uid:ea99939d-cc21-4811-ba27-1ccbb6b9d23e,Namespace:calico-system,Attempt:0,} returns sandbox id \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\""
Jan 17 12:22:34.726211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964897479.mount: Deactivated successfully.
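[editor's note: the driver-call.go / plugins.go churn above is the kubelet's FlexVolume dynamic probe. It repeatedly re-executes each vendor~driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and unmarshals a JSON status from stdout; since the nodeagent~uds/uds binary does not exist here, stdout is empty and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of a conforming driver's init reply follows, assuming the standard FlexVolume status shape; it is illustrative, not the actual nodeagent~uds driver.]

```go
// flexvol_init.go - minimal sketch of a FlexVolume driver's "init" call.
// Assumption: the struct below mirrors the JSON shape the kubelet's
// driver-call.go unmarshals; this is not Flatcar's nodeagent~uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type capabilities struct {
	Attach bool `json:"attach"`
}

type driverStatus struct {
	Status       string        `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string        `json:"message,omitempty"`
	Capabilities *capabilities `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// An empty stdout at this point is exactly what produces
		// "unexpected end of JSON input" in the kubelet log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: &capabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// Any volume command this sketch does not implement.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```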
Jan 17 12:22:35.087661 kubelet[2801]: E0117 12:22:35.086062 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3"
Jan 17 12:22:35.592478 containerd[1594]: time="2025-01-17T12:22:35.592398888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:35.594460 containerd[1594]: time="2025-01-17T12:22:35.594262761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 17 12:22:35.596238 containerd[1594]: time="2025-01-17T12:22:35.596152255Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:35.599974 containerd[1594]: time="2025-01-17T12:22:35.599900730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:35.601240 containerd[1594]: time="2025-01-17T12:22:35.601048752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.845589399s"
Jan 17 12:22:35.601240 containerd[1594]: time="2025-01-17T12:22:35.601096249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:22:35.604267 containerd[1594]: time="2025-01-17T12:22:35.603997826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:22:35.626716 containerd[1594]: time="2025-01-17T12:22:35.626583804Z" level=info msg="CreateContainer within sandbox \"bcc4b87be2f1e5c1cef37bd599489d6f78c2b45213383493ccc0e8ee8c5451b5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:22:35.649135 containerd[1594]: time="2025-01-17T12:22:35.649075095Z" level=info msg="CreateContainer within sandbox \"bcc4b87be2f1e5c1cef37bd599489d6f78c2b45213383493ccc0e8ee8c5451b5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6790c8ef48b84056a0fd82b95f8741d1233563192bf839ccdbabce3ba5ba486\""
Jan 17 12:22:35.650993 containerd[1594]: time="2025-01-17T12:22:35.649932301Z" level=info msg="StartContainer for \"a6790c8ef48b84056a0fd82b95f8741d1233563192bf839ccdbabce3ba5ba486\""
Jan 17 12:22:35.764177 containerd[1594]: time="2025-01-17T12:22:35.764116276Z" level=info msg="StartContainer for \"a6790c8ef48b84056a0fd82b95f8741d1233563192bf839ccdbabce3ba5ba486\" returns successfully"
Jan 17 12:22:36.236662 kubelet[2801]: I0117 12:22:36.234977 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-678c9b84d6-5rnt6" podStartSLOduration=1.3866339619999999 podStartE2EDuration="3.234921276s" podCreationTimestamp="2025-01-17 12:22:33 +0000 UTC" firstStartedPulling="2025-01-17 12:22:33.75346523 +0000 UTC m=+27.852599206" lastFinishedPulling="2025-01-17 12:22:35.601752533 +0000 UTC m=+29.700886520" observedRunningTime="2025-01-17 12:22:36.234824096 +0000 UTC m=+30.333958092" watchObservedRunningTime="2025-01-17 12:22:36.234921276 +0000 UTC m=+30.334055275"
Jan 17 12:22:36.276357 kubelet[2801]: E0117 12:22:36.276317 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:22:36.276357 kubelet[2801]: W0117 12:22:36.276407 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:22:36.276357 kubelet[2801]: E0117 12:22:36.276448 2801 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe sequence repeats 32 more times, identical except for timestamps, between 12:22:36.277296 and 12:22:36.318267]
Jan 17 12:22:36.683840 containerd[1594]: time="2025-01-17T12:22:36.683696237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:36.685627 containerd[1594]: time="2025-01-17T12:22:36.685341155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 17 12:22:36.687262 containerd[1594]: time="2025-01-17T12:22:36.686979914Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:36.690144 containerd[1594]: time="2025-01-17T12:22:36.690066020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:36.692124 containerd[1594]: time="2025-01-17T12:22:36.691163096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.087110939s"
Jan 17 12:22:36.692124 containerd[1594]: time="2025-01-17T12:22:36.691209934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:22:36.694149 containerd[1594]: time="2025-01-17T12:22:36.694110237Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:22:36.718632 containerd[1594]: time="2025-01-17T12:22:36.718585278Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836\""
Jan 17 12:22:36.719444 containerd[1594]: time="2025-01-17T12:22:36.719407001Z" level=info msg="StartContainer for \"a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836\""
Jan 17 12:22:36.826730 containerd[1594]: time="2025-01-17T12:22:36.826674263Z" level=info msg="StartContainer for \"a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836\" returns successfully"
Jan 17 12:22:36.889927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836-rootfs.mount: Deactivated successfully.
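[editor's note: the pod_startup_latency_tracker record above reports two durations for calico-typha. How they relate to the logged timestamps is inferred from the numbers themselves: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch reproducing that arithmetic; the layout string matches Go's default time.Time formatting used in these messages, and the formula is an assumption verified against the logged values.]

```go
// startup_latency.go - reproduce the pod startup duration arithmetic from
// the calico-typha record above, using the timestamps copied from the log.
package main

import (
	"fmt"
	"time"
)

// Layout that parses Go's default time.Time string form, as logged.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-17 12:22:33 +0000 UTC")
	firstPull := mustParse("2025-01-17 12:22:33.75346523 +0000 UTC")
	lastPull := mustParse("2025-01-17 12:22:35.601752533 +0000 UTC")
	watchObserved := mustParse("2025-01-17 12:22:36.234921276 +0000 UTC")

	e2e := watchObserved.Sub(created)    // logged: podStartE2EDuration="3.234921276s"
	slo := e2e - lastPull.Sub(firstPull) // logged: podStartSLOduration=1.3866339619999999

	// Note: the kubelet subtracts monotonic-clock readings (the m=+ offsets
	// in the log), so the last digits of slo differ slightly from this
	// wall-clock reconstruction.
	fmt.Printf("podStartE2EDuration=%v\n", e2e)
	fmt.Printf("podStartSLOduration=%v\n", slo)
}
```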
Jan 17 12:22:37.086745 kubelet[2801]: E0117 12:22:37.086558 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3"
Jan 17 12:22:37.222942 kubelet[2801]: I0117 12:22:37.222903 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:22:37.501878 containerd[1594]: time="2025-01-17T12:22:37.501698689Z" level=info msg="shim disconnected" id=a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836 namespace=k8s.io
Jan 17 12:22:37.501878 containerd[1594]: time="2025-01-17T12:22:37.501826130Z" level=warning msg="cleaning up after shim disconnected" id=a7b87874acfcf416e1d37e848d2b9e4765ec64fdb424a35207d039d08a024836 namespace=k8s.io
Jan 17 12:22:37.503556 containerd[1594]: time="2025-01-17T12:22:37.501841200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:22:38.248727 containerd[1594]: time="2025-01-17T12:22:38.248369216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:22:39.086315 kubelet[2801]: E0117 12:22:39.086252 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3"
Jan 17 12:22:41.087670 kubelet[2801]: E0117 12:22:41.086404 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3"
Jan 17 12:22:42.510289 containerd[1594]: time="2025-01-17T12:22:42.510223601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:42.511994 containerd[1594]: time="2025-01-17T12:22:42.511912642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 17 12:22:42.513820 containerd[1594]: time="2025-01-17T12:22:42.513719401Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:42.517289 containerd[1594]: time="2025-01-17T12:22:42.517221190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:42.518525 containerd[1594]: time="2025-01-17T12:22:42.518385854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.269956414s"
Jan 17 12:22:42.518525 containerd[1594]: time="2025-01-17T12:22:42.518435239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 17 12:22:42.521034 containerd[1594]: time="2025-01-17T12:22:42.520987625Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 12:22:42.543003 containerd[1594]: time="2025-01-17T12:22:42.542720529Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea\""
Jan 17 12:22:42.545821 containerd[1594]: time="2025-01-17T12:22:42.545202779Z" level=info msg="StartContainer for \"c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea\""
Jan 17 12:22:42.634459 containerd[1594]: time="2025-01-17T12:22:42.634401330Z" level=info msg="StartContainer for \"c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea\" returns successfully"
Jan 17 12:22:43.089128 kubelet[2801]: E0117 12:22:43.089062 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3"
Jan 17 12:22:43.519955 containerd[1594]: time="2025-01-17T12:22:43.519894114Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:22:43.558597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea-rootfs.mount: Deactivated successfully.
Jan 17 12:22:43.573301 kubelet[2801]: I0117 12:22:43.572989 2801 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:22:43.611703 kubelet[2801]: I0117 12:22:43.607826 2801 topology_manager.go:215] "Topology Admit Handler" podUID="4dc87eb0-298c-4584-bb03-0f123c703b75" podNamespace="kube-system" podName="coredns-76f75df574-bz4n4"
Jan 17 12:22:43.621155 kubelet[2801]: I0117 12:22:43.621105 2801 topology_manager.go:215] "Topology Admit Handler" podUID="ae2696f0-689c-48e8-ae28-02108fb8bde9" podNamespace="kube-system" podName="coredns-76f75df574-ksx62"
Jan 17 12:22:43.636845 kubelet[2801]: I0117 12:22:43.631121 2801 topology_manager.go:215] "Topology Admit Handler" podUID="146c2e68-8348-4ef2-ad46-1657816350fb" podNamespace="calico-apiserver" podName="calico-apiserver-597ff87f5d-pnrsk"
Jan 17 12:22:43.636845 kubelet[2801]: I0117 12:22:43.631551 2801 topology_manager.go:215] "Topology Admit Handler" podUID="ba8556b8-0ba5-4be1-a609-db203f9464c8" podNamespace="calico-apiserver" podName="calico-apiserver-597ff87f5d-kz498"
Jan 17 12:22:43.636845 kubelet[2801]: I0117 12:22:43.631848 2801 topology_manager.go:215] "Topology Admit Handler" podUID="c51740fd-ec40-41a1-a974-57291e05645a" podNamespace="calico-system" podName="calico-kube-controllers-5559488d8d-gh4cl"
[12:22:43.668245-43.668757 reconciler_common.go:258: "operationExecutor.VerifyControllerAttachedVolume started" for ten volumes: kube-api-access-nqqsx and config-volume (coredns-76f75df574-ksx62), kube-api-access-bzdcp and config-volume (coredns-76f75df574-bz4n4), kube-api-access-5sj24 and calico-apiserver-certs (calico-apiserver-597ff87f5d-kz498), kube-api-access-r5pd8 and calico-apiserver-certs (calico-apiserver-597ff87f5d-pnrsk), kube-api-access-wjpqw and tigera-ca-bundle (calico-kube-controllers-5559488d8d-gh4cl)]
Jan 17 12:22:43.927986 containerd[1594]: time="2025-01-17T12:22:43.927935744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bz4n4,Uid:4dc87eb0-298c-4584-bb03-0f123c703b75,Namespace:kube-system,Attempt:0,}"
Jan 17 12:22:43.938090 containerd[1594]: time="2025-01-17T12:22:43.938010792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559488d8d-gh4cl,Uid:c51740fd-ec40-41a1-a974-57291e05645a,Namespace:calico-system,Attempt:0,}"
Jan 17 12:22:43.942732 containerd[1594]: time="2025-01-17T12:22:43.942682451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ksx62,Uid:ae2696f0-689c-48e8-ae28-02108fb8bde9,Namespace:kube-system,Attempt:0,}"
Jan 17 12:22:43.944537 containerd[1594]: time="2025-01-17T12:22:43.944490581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-pnrsk,Uid:146c2e68-8348-4ef2-ad46-1657816350fb,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 12:22:43.951704 containerd[1594]: time="2025-01-17T12:22:43.951647362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-kz498,Uid:ba8556b8-0ba5-4be1-a609-db203f9464c8,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 12:22:44.319763 containerd[1594]: time="2025-01-17T12:22:44.319430065Z" level=info msg="shim disconnected" id=c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea namespace=k8s.io
Jan 17 12:22:44.319763 containerd[1594]: time="2025-01-17T12:22:44.319531139Z" level=warning msg="cleaning up after shim disconnected" id=c456e2885b1729a00e0e21d1bbdb18ef34441bc3f10b6210efe10fd7426d17ea namespace=k8s.io
Jan 17 12:22:44.319763 containerd[1594]: time="2025-01-17T12:22:44.319547693Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:22:44.642003 containerd[1594]: time="2025-01-17T12:22:44.641500685Z" level=error msg="Failed to destroy network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:44.649910 containerd[1594]: time="2025-01-17T12:22:44.646999921Z" level=error msg="encountered an error cleaning up failed sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:44.649910 containerd[1594]: time="2025-01-17T12:22:44.648974398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bz4n4,Uid:4dc87eb0-298c-4584-bb03-0f123c703b75,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:44.649691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85-shm.mount: Deactivated successfully.
Jan 17 12:22:44.650578 kubelet[2801]: E0117 12:22:44.650222 2801 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:44.650578 kubelet[2801]: E0117 12:22:44.650323 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bz4n4"
Jan 17 12:22:44.650578 kubelet[2801]: E0117 12:22:44.650361 2801 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bz4n4"
Jan 17 12:22:44.655749 kubelet[2801]: E0117 12:22:44.650471 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bz4n4_kube-system(4dc87eb0-298c-4584-bb03-0f123c703b75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bz4n4_kube-system(4dc87eb0-298c-4584-bb03-0f123c703b75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bz4n4" podUID="4dc87eb0-298c-4584-bb03-0f123c703b75"
[12:22:44.652-44.666: the same containerd destroy-network / cleanup-failed-sandbox / RunPodSandbox-failed and kubelet RunPodSandbox / CreatePodSandbox / Error-syncing-pod sequence repeats for calico-system/calico-kube-controllers-5559488d8d-gh4cl (podUID c51740fd-ec40-41a1-a974-57291e05645a, sandbox 55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806), followed by deactivation of the matching cri-sandboxes shm.mount]
[12:22:44.668-44.679: the sequence repeats for calico-apiserver/calico-apiserver-597ff87f5d-pnrsk (podUID 146c2e68-8348-4ef2-ad46-1657816350fb, sandbox 955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f), again with the matching shm.mount deactivated]
[12:22:44.696-44.705: the sequence repeats for calico-apiserver/calico-apiserver-597ff87f5d-kz498 (podUID ba8556b8-0ba5-4be1-a609-db203f9464c8, sandbox 29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6), again with the matching shm.mount deactivated]
[12:22:44.708-44.709: the sequence repeats for kube-system/coredns-76f75df574-ksx62 (podUID ae2696f0-689c-48e8-ae28-02108fb8bde9, sandbox 73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050)]
Jan 17 12:22:45.089499 containerd[1594]: time="2025-01-17T12:22:45.089288773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btsnv,Uid:d681af9d-6a3e-41bc-9243-7f519ac5c8d3,Namespace:calico-system,Attempt:0,}"
Jan 17 12:22:45.173654 containerd[1594]: time="2025-01-17T12:22:45.173358883Z" level=error msg="Failed to destroy network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:45.174207 containerd[1594]: time="2025-01-17T12:22:45.174160210Z" level=error msg="encountered an error cleaning up failed sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:45.174341 containerd[1594]: time="2025-01-17T12:22:45.174249940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btsnv,Uid:d681af9d-6a3e-41bc-9243-7f519ac5c8d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:45.174621 kubelet[2801]: E0117 12:22:45.174551 2801 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:22:45.174621 kubelet[2801]: E0117 12:22:45.174622 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-btsnv"
Jan 17 12:22:45.174822 kubelet[2801]: E0117 12:22:45.174655 2801 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-btsnv" Jan 17 12:22:45.174822 kubelet[2801]: E0117 12:22:45.174737 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-btsnv_calico-system(d681af9d-6a3e-41bc-9243-7f519ac5c8d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-btsnv_calico-system(d681af9d-6a3e-41bc-9243-7f519ac5c8d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3" Jan 17 12:22:45.271387 kubelet[2801]: I0117 12:22:45.271350 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:45.274133 containerd[1594]: time="2025-01-17T12:22:45.273055991Z" level=info msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\"" Jan 17 12:22:45.274133 containerd[1594]: time="2025-01-17T12:22:45.273296097Z" level=info msg="Ensure that sandbox 73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050 in task-service has been cleanup successfully" Jan 17 12:22:45.274382 kubelet[2801]: I0117 12:22:45.273516 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:45.275056 containerd[1594]: time="2025-01-17T12:22:45.274862625Z" level=info msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\"" Jan 17 12:22:45.275556 containerd[1594]: time="2025-01-17T12:22:45.275415176Z" level=info msg="Ensure that sandbox 55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806 in task-service has been cleanup successfully" Jan 17 12:22:45.283669 kubelet[2801]: I0117 12:22:45.282754 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:45.284804 containerd[1594]: time="2025-01-17T12:22:45.284668314Z" level=info msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" Jan 17 12:22:45.285861 containerd[1594]: time="2025-01-17T12:22:45.284962060Z" level=info msg="Ensure that sandbox 9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85 in task-service has been cleanup successfully" Jan 17 12:22:45.296321 containerd[1594]: time="2025-01-17T12:22:45.296273512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:22:45.302836 kubelet[2801]: I0117 12:22:45.301984 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:45.303801 containerd[1594]: time="2025-01-17T12:22:45.303721151Z" level=info msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\"" Jan 17 12:22:45.304939 containerd[1594]: 
time="2025-01-17T12:22:45.304898604Z" level=info msg="Ensure that sandbox 29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6 in task-service has been cleanup successfully" Jan 17 12:22:45.309755 kubelet[2801]: I0117 12:22:45.309695 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:45.314524 containerd[1594]: time="2025-01-17T12:22:45.312131949Z" level=info msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\"" Jan 17 12:22:45.314524 containerd[1594]: time="2025-01-17T12:22:45.312347199Z" level=info msg="Ensure that sandbox 955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f in task-service has been cleanup successfully" Jan 17 12:22:45.318431 kubelet[2801]: I0117 12:22:45.318404 2801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:22:45.326747 containerd[1594]: time="2025-01-17T12:22:45.326316377Z" level=info msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" Jan 17 12:22:45.326747 containerd[1594]: time="2025-01-17T12:22:45.326564236Z" level=info msg="Ensure that sandbox 146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01 in task-service has been cleanup successfully" Jan 17 12:22:45.450100 containerd[1594]: time="2025-01-17T12:22:45.449972005Z" level=error msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" failed" error="failed to destroy network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.450423 kubelet[2801]: E0117 12:22:45.450363 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:45.450510 kubelet[2801]: E0117 12:22:45.450470 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85"} Jan 17 12:22:45.450579 kubelet[2801]: E0117 12:22:45.450533 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4dc87eb0-298c-4584-bb03-0f123c703b75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.450700 kubelet[2801]: E0117 12:22:45.450583 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4dc87eb0-298c-4584-bb03-0f123c703b75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bz4n4" podUID="4dc87eb0-298c-4584-bb03-0f123c703b75" Jan 17 12:22:45.478104 containerd[1594]: time="2025-01-17T12:22:45.478042582Z" level=error msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" failed" error="failed to destroy network for sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.478624 kubelet[2801]: E0117 12:22:45.478584 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:45.478757 kubelet[2801]: E0117 12:22:45.478648 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6"} Jan 17 12:22:45.478757 kubelet[2801]: E0117 12:22:45.478702 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba8556b8-0ba5-4be1-a609-db203f9464c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.478757 kubelet[2801]: E0117 12:22:45.478744 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba8556b8-0ba5-4be1-a609-db203f9464c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-597ff87f5d-kz498" podUID="ba8556b8-0ba5-4be1-a609-db203f9464c8" Jan 17 12:22:45.490335 containerd[1594]: time="2025-01-17T12:22:45.490120407Z" level=error msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" failed" error="failed to destroy network for sandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.490536 kubelet[2801]: E0117 12:22:45.490446 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:45.490536 kubelet[2801]: E0117 12:22:45.490501 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806"} Jan 17 12:22:45.490974 kubelet[2801]: E0117 12:22:45.490559 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c51740fd-ec40-41a1-a974-57291e05645a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.490974 kubelet[2801]: E0117 12:22:45.490613 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c51740fd-ec40-41a1-a974-57291e05645a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5559488d8d-gh4cl" podUID="c51740fd-ec40-41a1-a974-57291e05645a" Jan 17 12:22:45.497792 containerd[1594]: time="2025-01-17T12:22:45.497672194Z" level=error msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" failed" error="failed to destroy network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.498414 kubelet[2801]: E0117 12:22:45.498308 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:22:45.498414 kubelet[2801]: E0117 12:22:45.498372 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01"} Jan 17 12:22:45.498957 kubelet[2801]: E0117 12:22:45.498430 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.498957 kubelet[2801]: E0117 12:22:45.498476 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d681af9d-6a3e-41bc-9243-7f519ac5c8d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-btsnv" podUID="d681af9d-6a3e-41bc-9243-7f519ac5c8d3" Jan 17 12:22:45.502664 containerd[1594]: time="2025-01-17T12:22:45.502613558Z" level=error msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" failed" error="failed to destroy network for sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.502912 containerd[1594]: time="2025-01-17T12:22:45.502685586Z" level=error msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" failed" error="failed to destroy network for sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:45.503048 kubelet[2801]: E0117 12:22:45.503021 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:45.503131 kubelet[2801]: E0117 12:22:45.503073 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050"} Jan 17 12:22:45.503131 kubelet[2801]: E0117 12:22:45.503129 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae2696f0-689c-48e8-ae28-02108fb8bde9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.503283 kubelet[2801]: E0117 12:22:45.503249 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae2696f0-689c-48e8-ae28-02108fb8bde9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-ksx62" podUID="ae2696f0-689c-48e8-ae28-02108fb8bde9" Jan 17 12:22:45.503381 kubelet[2801]: E0117 12:22:45.503333 2801 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:45.503439 kubelet[2801]: E0117 12:22:45.503385 2801 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f"} Jan 17 12:22:45.503502 kubelet[2801]: E0117 12:22:45.503438 2801 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"146c2e68-8348-4ef2-ad46-1657816350fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:45.503592 kubelet[2801]: E0117 12:22:45.503501 2801 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"146c2e68-8348-4ef2-ad46-1657816350fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-597ff87f5d-pnrsk" podUID="146c2e68-8348-4ef2-ad46-1657816350fb" Jan 17 12:22:45.555413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050-shm.mount: Deactivated successfully. Jan 17 12:22:46.492829 kubelet[2801]: I0117 12:22:46.492742 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:51.776845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072408451.mount: Deactivated successfully. 
Jan 17 12:22:51.820650 containerd[1594]: time="2025-01-17T12:22:51.820580356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.822374 containerd[1594]: time="2025-01-17T12:22:51.822281560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:22:51.824381 containerd[1594]: time="2025-01-17T12:22:51.824300775Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.827738 containerd[1594]: time="2025-01-17T12:22:51.827631738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.828738 containerd[1594]: time="2025-01-17T12:22:51.828553554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.532219978s" Jan 17 12:22:51.828738 containerd[1594]: time="2025-01-17T12:22:51.828607256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:22:51.842728 containerd[1594]: time="2025-01-17T12:22:51.842649012Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:22:51.870335 containerd[1594]: time="2025-01-17T12:22:51.870238445Z" level=info msg="CreateContainer within sandbox \"01922a6ecfcc57247559d9055a74850666c7eeabf00360c188a14084c2a6675a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"13ff6a13e7a4294c73d15bd00a7c5d6b8d062290fe5f3ef395717a0a5815f6ee\"" Jan 17 12:22:51.874857 containerd[1594]: time="2025-01-17T12:22:51.872254450Z" level=info msg="StartContainer for \"13ff6a13e7a4294c73d15bd00a7c5d6b8d062290fe5f3ef395717a0a5815f6ee\"" Jan 17 12:22:51.957216 containerd[1594]: time="2025-01-17T12:22:51.957145238Z" level=info msg="StartContainer for \"13ff6a13e7a4294c73d15bd00a7c5d6b8d062290fe5f3ef395717a0a5815f6ee\" returns successfully" Jan 17 12:22:52.075119 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:22:52.075320 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 17 12:22:53.813821 kernel: bpftool[4004]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:22:54.187301 systemd-networkd[1217]: vxlan.calico: Link UP Jan 17 12:22:54.187330 systemd-networkd[1217]: vxlan.calico: Gained carrier Jan 17 12:22:55.624059 systemd-networkd[1217]: vxlan.calico: Gained IPv6LL Jan 17 12:22:56.090800 containerd[1594]: time="2025-01-17T12:22:56.089541107Z" level=info msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" Jan 17 12:22:56.093530 containerd[1594]: time="2025-01-17T12:22:56.093372159Z" level=info msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\"" Jan 17 12:22:56.193223 kubelet[2801]: I0117 12:22:56.192999 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-vrkcb" podStartSLOduration=5.136378299 podStartE2EDuration="23.192879878s" podCreationTimestamp="2025-01-17 12:22:33 +0000 UTC" firstStartedPulling="2025-01-17 12:22:33.772460924 +0000 UTC m=+27.871594908" lastFinishedPulling="2025-01-17 12:22:51.828962497 +0000 UTC m=+45.928096487" observedRunningTime="2025-01-17 12:22:52.377565861 +0000 UTC m=+46.476699859" watchObservedRunningTime="2025-01-17 12:22:56.192879878 +0000 UTC m=+50.292013880" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.186 [INFO][4122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.188 [INFO][4122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" iface="eth0" netns="/var/run/netns/cni-e566e5b7-0556-7586-bf29-1cfae5716d5f" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.189 [INFO][4122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" iface="eth0" netns="/var/run/netns/cni-e566e5b7-0556-7586-bf29-1cfae5716d5f" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.190 [INFO][4122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" iface="eth0" netns="/var/run/netns/cni-e566e5b7-0556-7586-bf29-1cfae5716d5f" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.190 [INFO][4122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.191 [INFO][4122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.235 [INFO][4136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.236 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.236 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.246 [WARNING][4136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.246 [INFO][4136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.248 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:56.253805 containerd[1594]: 2025-01-17 12:22:56.252 [INFO][4122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:22:56.259150 containerd[1594]: time="2025-01-17T12:22:56.256723896Z" level=info msg="TearDown network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" successfully" Jan 17 12:22:56.259150 containerd[1594]: time="2025-01-17T12:22:56.256932714Z" level=info msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" returns successfully" Jan 17 12:22:56.263246 systemd[1]: run-netns-cni\x2de566e5b7\x2d0556\x2d7586\x2dbf29\x2d1cfae5716d5f.mount: Deactivated successfully. Jan 17 12:22:56.265869 containerd[1594]: time="2025-01-17T12:22:56.265722841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bz4n4,Uid:4dc87eb0-298c-4584-bb03-0f123c703b75,Namespace:kube-system,Attempt:1,}" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.191 [INFO][4123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.193 [INFO][4123] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" iface="eth0" netns="/var/run/netns/cni-bd337c61-38c7-cb3b-1359-f5e65cea70ee" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.195 [INFO][4123] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" iface="eth0" netns="/var/run/netns/cni-bd337c61-38c7-cb3b-1359-f5e65cea70ee" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.195 [INFO][4123] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" iface="eth0" netns="/var/run/netns/cni-bd337c61-38c7-cb3b-1359-f5e65cea70ee" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.195 [INFO][4123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.195 [INFO][4123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.243 [INFO][4137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.243 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.248 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.258 [WARNING][4137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.259 [INFO][4137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.267 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:56.271792 containerd[1594]: 2025-01-17 12:22:56.269 [INFO][4123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:22:56.272519 containerd[1594]: time="2025-01-17T12:22:56.272000283Z" level=info msg="TearDown network for sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" successfully" Jan 17 12:22:56.272519 containerd[1594]: time="2025-01-17T12:22:56.272035421Z" level=info msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" returns successfully" Jan 17 12:22:56.274277 containerd[1594]: time="2025-01-17T12:22:56.274234427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ksx62,Uid:ae2696f0-689c-48e8-ae28-02108fb8bde9,Namespace:kube-system,Attempt:1,}" Jan 17 12:22:56.280428 systemd[1]: run-netns-cni\x2dbd337c61\x2d38c7\x2dcb3b\x2d1359\x2df5e65cea70ee.mount: Deactivated successfully. 
Jan 17 12:22:56.520844 systemd-networkd[1217]: cali3fc3c383293: Link UP Jan 17 12:22:56.523536 systemd-networkd[1217]: cali3fc3c383293: Gained carrier Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.392 [INFO][4148] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0 coredns-76f75df574- kube-system 4dc87eb0-298c-4584-bb03-0f123c703b75 747 0 2025-01-17 12:22:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal coredns-76f75df574-bz4n4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3fc3c383293 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.393 [INFO][4148] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.467 [INFO][4171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" HandleID="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.481 [INFO][4171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" HandleID="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"coredns-76f75df574-bz4n4", "timestamp":"2025-01-17 12:22:56.467560605 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.481 [INFO][4171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.481 [INFO][4171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.481 [INFO][4171] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.483 [INFO][4171] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.488 [INFO][4171] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.493 [INFO][4171] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.495 [INFO][4171] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.498 [INFO][4171] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.498 [INFO][4171] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.499 [INFO][4171] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7 Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.504 [INFO][4171] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4171] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 handle="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4171] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:22:56.559933 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" HandleID="k8s-pod-network.12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.514 [INFO][4148] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4dc87eb0-298c-4584-bb03-0f123c703b75", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-bz4n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fc3c383293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.514 [INFO][4148] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.65/32] ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.514 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fc3c383293 ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.526 [INFO][4148] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.527 [INFO][4148] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4dc87eb0-298c-4584-bb03-0f123c703b75", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7", Pod:"coredns-76f75df574-bz4n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fc3c383293", MAC:"da:bd:f1:41:d5:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:56.562362 containerd[1594]: 2025-01-17 12:22:56.552 [INFO][4148] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7" Namespace="kube-system" Pod="coredns-76f75df574-bz4n4" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:22:56.589520 systemd-networkd[1217]: caliac282a76c30: Link UP Jan 17 12:22:56.590913 systemd-networkd[1217]: caliac282a76c30: Gained carrier Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.415 [INFO][4158] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0 coredns-76f75df574- kube-system ae2696f0-689c-48e8-ae28-02108fb8bde9 748 0 2025-01-17 
12:22:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal coredns-76f75df574-ksx62 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac282a76c30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.415 [INFO][4158] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.470 [INFO][4175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" HandleID="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.484 [INFO][4175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" HandleID="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc580), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"coredns-76f75df574-ksx62", "timestamp":"2025-01-17 12:22:56.470962124 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.484 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.511 [INFO][4175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.514 [INFO][4175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.522 [INFO][4175] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.532 [INFO][4175] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.536 [INFO][4175] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.552 [INFO][4175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.552 [INFO][4175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.554 [INFO][4175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.566 [INFO][4175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.579 [INFO][4175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 handle="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.579 [INFO][4175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.579 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:22:56.619109 containerd[1594]: 2025-01-17 12:22:56.580 [INFO][4175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" HandleID="k8s-pod-network.aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.585 [INFO][4158] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ae2696f0-689c-48e8-ae28-02108fb8bde9", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-ksx62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac282a76c30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.585 [INFO][4158] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.66/32] ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.586 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac282a76c30 ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.591 [INFO][4158] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.591 [INFO][4158] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ae2696f0-689c-48e8-ae28-02108fb8bde9", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa", Pod:"coredns-76f75df574-ksx62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac282a76c30", MAC:"4e:dd:33:43:65:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:56.621480 containerd[1594]: 2025-01-17 12:22:56.609 [INFO][4158] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa" Namespace="kube-system" Pod="coredns-76f75df574-ksx62" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:22:56.650590 containerd[1594]: time="2025-01-17T12:22:56.650104898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:56.650590 containerd[1594]: time="2025-01-17T12:22:56.650191872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:56.650590 containerd[1594]: time="2025-01-17T12:22:56.650219713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:56.650590 containerd[1594]: time="2025-01-17T12:22:56.650373502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:56.678421 containerd[1594]: time="2025-01-17T12:22:56.677834093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:56.678421 containerd[1594]: time="2025-01-17T12:22:56.677927862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:56.678421 containerd[1594]: time="2025-01-17T12:22:56.677955826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:56.678421 containerd[1594]: time="2025-01-17T12:22:56.678099742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:56.769537 containerd[1594]: time="2025-01-17T12:22:56.769379034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bz4n4,Uid:4dc87eb0-298c-4584-bb03-0f123c703b75,Namespace:kube-system,Attempt:1,} returns sandbox id \"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7\"" Jan 17 12:22:56.775546 containerd[1594]: time="2025-01-17T12:22:56.775294470Z" level=info msg="CreateContainer within sandbox \"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:56.789911 containerd[1594]: time="2025-01-17T12:22:56.789711798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ksx62,Uid:ae2696f0-689c-48e8-ae28-02108fb8bde9,Namespace:kube-system,Attempt:1,} returns sandbox id \"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa\"" Jan 17 12:22:56.795452 containerd[1594]: time="2025-01-17T12:22:56.795163661Z" level=info msg="CreateContainer within sandbox \"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:56.797898 containerd[1594]: time="2025-01-17T12:22:56.797855018Z" level=info msg="CreateContainer within sandbox \"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e831ccb8589f10c444009d906e36bba0c7aaffd414f319d64f7e47647a142392\"" Jan 17 12:22:56.812262 containerd[1594]: time="2025-01-17T12:22:56.812158345Z" level=info msg="StartContainer for \"e831ccb8589f10c444009d906e36bba0c7aaffd414f319d64f7e47647a142392\"" Jan 17 12:22:56.817361 containerd[1594]: time="2025-01-17T12:22:56.817198144Z" level=info msg="CreateContainer within sandbox \"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef5af60cca3eb8eaedfb85d0316108fb8fd073256534018a9f29b5b9e93ecbef\"" Jan 17 12:22:56.821463 containerd[1594]: time="2025-01-17T12:22:56.818564874Z" level=info msg="StartContainer for \"ef5af60cca3eb8eaedfb85d0316108fb8fd073256534018a9f29b5b9e93ecbef\"" Jan 17 12:22:56.914735 containerd[1594]: 
time="2025-01-17T12:22:56.914682372Z" level=info msg="StartContainer for \"e831ccb8589f10c444009d906e36bba0c7aaffd414f319d64f7e47647a142392\" returns successfully" Jan 17 12:22:56.926745 containerd[1594]: time="2025-01-17T12:22:56.925429396Z" level=info msg="StartContainer for \"ef5af60cca3eb8eaedfb85d0316108fb8fd073256534018a9f29b5b9e93ecbef\" returns successfully" Jan 17 12:22:57.087859 containerd[1594]: time="2025-01-17T12:22:57.086827267Z" level=info msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\"" Jan 17 12:22:57.087859 containerd[1594]: time="2025-01-17T12:22:57.087428195Z" level=info msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\"" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.190 [INFO][4392] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.190 [INFO][4392] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" iface="eth0" netns="/var/run/netns/cni-8259b800-9f43-f5a0-a0fe-c072f3782b6d" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.191 [INFO][4392] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" iface="eth0" netns="/var/run/netns/cni-8259b800-9f43-f5a0-a0fe-c072f3782b6d" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.193 [INFO][4392] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" iface="eth0" netns="/var/run/netns/cni-8259b800-9f43-f5a0-a0fe-c072f3782b6d" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.193 [INFO][4392] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.193 [INFO][4392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.239 [INFO][4407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.239 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.239 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.249 [WARNING][4407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.249 [INFO][4407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.251 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:57.254736 containerd[1594]: 2025-01-17 12:22:57.253 [INFO][4392] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:22:57.258953 containerd[1594]: time="2025-01-17T12:22:57.257985112Z" level=info msg="TearDown network for sandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" successfully" Jan 17 12:22:57.258953 containerd[1594]: time="2025-01-17T12:22:57.258050473Z" level=info msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" returns successfully" Jan 17 12:22:57.265450 containerd[1594]: time="2025-01-17T12:22:57.264414022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559488d8d-gh4cl,Uid:c51740fd-ec40-41a1-a974-57291e05645a,Namespace:calico-system,Attempt:1,}" Jan 17 12:22:57.277679 systemd[1]: run-netns-cni\x2d8259b800\x2d9f43\x2df5a0\x2da0fe\x2dc072f3782b6d.mount: Deactivated successfully. Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.177 [INFO][4385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.178 [INFO][4385] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" iface="eth0" netns="/var/run/netns/cni-99c3c917-36b7-e312-4951-66d6963c7235" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.179 [INFO][4385] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" iface="eth0" netns="/var/run/netns/cni-99c3c917-36b7-e312-4951-66d6963c7235" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.179 [INFO][4385] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" iface="eth0" netns="/var/run/netns/cni-99c3c917-36b7-e312-4951-66d6963c7235" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.180 [INFO][4385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.180 [INFO][4385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.245 [INFO][4403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.245 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.252 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.274 [WARNING][4403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.274 [INFO][4403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.278 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:57.283212 containerd[1594]: 2025-01-17 12:22:57.280 [INFO][4385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:22:57.285529 containerd[1594]: time="2025-01-17T12:22:57.283498363Z" level=info msg="TearDown network for sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" successfully" Jan 17 12:22:57.285529 containerd[1594]: time="2025-01-17T12:22:57.283538371Z" level=info msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" returns successfully" Jan 17 12:22:57.285529 containerd[1594]: time="2025-01-17T12:22:57.284703431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-kz498,Uid:ba8556b8-0ba5-4be1-a609-db203f9464c8,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:22:57.294072 systemd[1]: run-netns-cni\x2d99c3c917\x2d36b7\x2de312\x2d4951\x2d66d6963c7235.mount: Deactivated successfully. 
Jan 17 12:22:57.421399 kubelet[2801]: I0117 12:22:57.421262 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ksx62" podStartSLOduration=37.421201854 podStartE2EDuration="37.421201854s" podCreationTimestamp="2025-01-17 12:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:57.417019246 +0000 UTC m=+51.516153243" watchObservedRunningTime="2025-01-17 12:22:57.421201854 +0000 UTC m=+51.520335853" Jan 17 12:22:57.450321 kubelet[2801]: I0117 12:22:57.449186 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bz4n4" podStartSLOduration=37.449129898 podStartE2EDuration="37.449129898s" podCreationTimestamp="2025-01-17 12:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:57.444924103 +0000 UTC m=+51.544058101" watchObservedRunningTime="2025-01-17 12:22:57.449129898 +0000 UTC m=+51.548263895" Jan 17 12:22:57.680026 systemd-networkd[1217]: califdee97c8ded: Link UP Jan 17 12:22:57.684623 systemd-networkd[1217]: califdee97c8ded: Gained carrier Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.408 [INFO][4416] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0 calico-kube-controllers-5559488d8d- calico-system c51740fd-ec40-41a1-a974-57291e05645a 766 0 2025-01-17 12:22:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5559488d8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal calico-kube-controllers-5559488d8d-gh4cl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califdee97c8ded [] []}} ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.409 [INFO][4416] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.527 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" HandleID="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.562 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" 
HandleID="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317a40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5559488d8d-gh4cl", "timestamp":"2025-01-17 12:22:57.527873715 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.563 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.563 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.563 [INFO][4441] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.568 [INFO][4441] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.590 [INFO][4441] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.608 [INFO][4441] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.612 [INFO][4441] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.625 [INFO][4441] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.626 [INFO][4441] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.630 [INFO][4441] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.639 [INFO][4441] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4441] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" 
host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4441] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:57.721257 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" HandleID="k8s-pod-network.c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.669 [INFO][4416] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0", GenerateName:"calico-kube-controllers-5559488d8d-", Namespace:"calico-system", SelfLink:"", UID:"c51740fd-ec40-41a1-a974-57291e05645a", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559488d8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5559488d8d-gh4cl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdee97c8ded", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.671 [INFO][4416] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.67/32] ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.671 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
califdee97c8ded ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.679 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.680 [INFO][4416] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0", GenerateName:"calico-kube-controllers-5559488d8d-", Namespace:"calico-system", SelfLink:"", UID:"c51740fd-ec40-41a1-a974-57291e05645a", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559488d8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f", Pod:"calico-kube-controllers-5559488d8d-gh4cl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdee97c8ded", MAC:"3a:18:3f:7f:e1:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:57.722454 containerd[1594]: 2025-01-17 12:22:57.703 [INFO][4416] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f" Namespace="calico-system" Pod="calico-kube-controllers-5559488d8d-gh4cl" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:22:57.781517 systemd-networkd[1217]: cali1e04ab2514b: Link UP Jan 17 12:22:57.787487 systemd-networkd[1217]: cali1e04ab2514b: Gained carrier Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.488 [INFO][4425] cni-plugin/plugin.go 325: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0 calico-apiserver-597ff87f5d- calico-apiserver ba8556b8-0ba5-4be1-a609-db203f9464c8 765 0 2025-01-17 12:22:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:597ff87f5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal calico-apiserver-597ff87f5d-kz498 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e04ab2514b [] []}} ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.488 [INFO][4425] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.584 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" HandleID="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.619 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" HandleID="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"calico-apiserver-597ff87f5d-kz498", "timestamp":"2025-01-17 12:22:57.584702253 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.619 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
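Stepping back to the kubelet lines above: pod_startup_latency_tracker reports podStartSLOduration equal to podStartE2EDuration (about 37.42 s) for both coredns pods because firstStartedPulling and lastFinishedPulling are the zero time, meaning no image pull contributed; the duration then collapses to observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic (simplified; the real tracker subtracts pull time when a pull actually happened):

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-01-17T12:22:20Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:22:57.417019246Z")

	e2e := running.Sub(created) // podStartE2EDuration
	var pull time.Duration      // both pull timestamps are the zero time: nothing was pulled
	slo := e2e - pull           // podStartSLOduration excludes image-pull time

	fmt.Println(e2e, slo) // ~37.42s for both, matching the log
}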
Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.664 [INFO][4446] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.667 [INFO][4446] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.688 [INFO][4446] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.717 [INFO][4446] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.725 [INFO][4446] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.732 [INFO][4446] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.733 [INFO][4446] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.737 [INFO][4446] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.750 [INFO][4446] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.763 [INFO][4446] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 handle="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.763 [INFO][4446] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.763 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
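Worth noting: three consecutive ADDs on this node drew 192.168.124.66, .67, and .68 in strict order. Each request scans the same affine /26 from its first free ordinal while holding the host-wide lock, so back-to-back assignments come out consecutive (compare the block sketch after the first claim above).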
Jan 17 12:22:57.819999 containerd[1594]: 2025-01-17 12:22:57.764 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" HandleID="k8s-pod-network.c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.766 [INFO][4425] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba8556b8-0ba5-4be1-a609-db203f9464c8", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-597ff87f5d-kz498", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e04ab2514b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.767 [INFO][4425] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.68/32] ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.767 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e04ab2514b ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.788 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" 
Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.794 [INFO][4425] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba8556b8-0ba5-4be1-a609-db203f9464c8", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a", Pod:"calico-apiserver-597ff87f5d-kz498", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e04ab2514b", MAC:"2e:0c:21:72:ac:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:57.822114 containerd[1594]: 2025-01-17 12:22:57.814 [INFO][4425] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-kz498" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:22:57.827356 containerd[1594]: time="2025-01-17T12:22:57.826208075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:57.827356 containerd[1594]: time="2025-01-17T12:22:57.826298708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:57.827356 containerd[1594]: time="2025-01-17T12:22:57.826322017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:57.827356 containerd[1594]: time="2025-01-17T12:22:57.826452919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:57.875648 containerd[1594]: time="2025-01-17T12:22:57.875415273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:57.875648 containerd[1594]: time="2025-01-17T12:22:57.875577420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:57.876645 containerd[1594]: time="2025-01-17T12:22:57.875616675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:57.877466 containerd[1594]: time="2025-01-17T12:22:57.877328288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:57.991988 systemd-networkd[1217]: caliac282a76c30: Gained IPv6LL Jan 17 12:22:57.999803 containerd[1594]: time="2025-01-17T12:22:57.999734804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559488d8d-gh4cl,Uid:c51740fd-ec40-41a1-a974-57291e05645a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f\"" Jan 17 12:22:58.003647 containerd[1594]: time="2025-01-17T12:22:58.003605019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:22:58.006642 containerd[1594]: time="2025-01-17T12:22:58.006578240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-kz498,Uid:ba8556b8-0ba5-4be1-a609-db203f9464c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a\"" Jan 17 12:22:58.439976 systemd-networkd[1217]: cali3fc3c383293: Gained IPv6LL Jan 17 12:22:59.081297 systemd-networkd[1217]: cali1e04ab2514b: Gained IPv6LL Jan 17 12:22:59.087505 containerd[1594]: time="2025-01-17T12:22:59.087295880Z" level=info msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\"" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.192 [INFO][4588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.195 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" iface="eth0" netns="/var/run/netns/cni-5d980b3f-8c99-0c91-0474-7b2b03b54608" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.196 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" iface="eth0" netns="/var/run/netns/cni-5d980b3f-8c99-0c91-0474-7b2b03b54608" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.198 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" iface="eth0" netns="/var/run/netns/cni-5d980b3f-8c99-0c91-0474-7b2b03b54608" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.198 [INFO][4588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.198 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.258 [INFO][4595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.258 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.258 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.270 [WARNING][4595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.270 [INFO][4595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.272 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:59.278086 containerd[1594]: 2025-01-17 12:22:59.275 [INFO][4588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:22:59.280317 containerd[1594]: time="2025-01-17T12:22:59.279910815Z" level=info msg="TearDown network for sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" successfully" Jan 17 12:22:59.280317 containerd[1594]: time="2025-01-17T12:22:59.279963520Z" level=info msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" returns successfully" Jan 17 12:22:59.285620 containerd[1594]: time="2025-01-17T12:22:59.285294573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-pnrsk,Uid:146c2e68-8348-4ef2-ad46-1657816350fb,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:22:59.286765 systemd[1]: run-netns-cni\x2d5d980b3f\x2d8c99\x2d0c91\x2d0474\x2d7b2b03b54608.mount: Deactivated successfully. 
Jan 17 12:22:59.400657 systemd-networkd[1217]: califdee97c8ded: Gained IPv6LL Jan 17 12:22:59.579910 systemd-networkd[1217]: calia798c631633: Link UP Jan 17 12:22:59.582879 systemd-networkd[1217]: calia798c631633: Gained carrier Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.410 [INFO][4601] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0 calico-apiserver-597ff87f5d- calico-apiserver 146c2e68-8348-4ef2-ad46-1657816350fb 792 0 2025-01-17 12:22:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:597ff87f5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal calico-apiserver-597ff87f5d-pnrsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia798c631633 [] []}} ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.410 [INFO][4601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.475 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" HandleID="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.496 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" HandleID="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048fd40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"calico-apiserver-597ff87f5d-pnrsk", "timestamp":"2025-01-17 12:22:59.475587931 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.497 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.497 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
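The "Gained IPv6LL" messages from systemd-networkd record each cali* interface acquiring its IPv6 link-local address. For an interface with a fixed MAC (for example the 4e:dd:33:43:65:15 recorded for the coredns endpoint above), the classic EUI-64 derivation flips the universal/local bit and splices ff:fe into the middle of the MAC; networkd can also be configured for stable-privacy addressing instead, so take this as the conventional scheme only:

package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC computes the classic EUI-64 IPv6 link-local address:
// fe80:: plus the MAC with its U/L bit flipped and ff:fe inserted.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("4e:dd:33:43:65:15") // MAC from the endpoint above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::4cdd:33ff:fe43:6515
}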
Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.497 [INFO][4613] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.500 [INFO][4613] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.506 [INFO][4613] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.514 [INFO][4613] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.519 [INFO][4613] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.527 [INFO][4613] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.527 [INFO][4613] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.530 [INFO][4613] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89 Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.541 [INFO][4613] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.555 [INFO][4613] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 handle="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.556 [INFO][4613] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.556 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
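Note: the entries above trace one complete IPAM assignment under the host-wide lock: look up this node's block affinities, confirm affinity for 192.168.124.64/26, load the block, then claim the first free address (192.168.124.69 here). A rough Go sketch of that first-free scan, assuming .64 through .68 are already taken on this node as the surrounding log suggests (the real allocator also writes the claim back to the datastore):

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a block's addresses in order and returns the first
// one not yet allocated, mirroring the "attempt to assign 1 addresses
// from block" step in the log.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.124.64/26")
	allocated := map[netip.Addr]bool{} // assumption: .64-.68 already in use
	for _, s := range []string{"192.168.124.64", "192.168.124.65", "192.168.124.66", "192.168.124.67", "192.168.124.68"} {
		allocated[netip.MustParseAddr(s)] = true
	}
	ip, _ := firstFree(block, allocated)
	fmt.Println(ip) // 192.168.124.69
}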
Jan 17 12:22:59.619407 containerd[1594]: 2025-01-17 12:22:59.556 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" HandleID="k8s-pod-network.73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.568 [INFO][4601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"146c2e68-8348-4ef2-ad46-1657816350fb", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-597ff87f5d-pnrsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia798c631633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.569 [INFO][4601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.69/32] ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.569 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia798c631633 ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.583 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" 
Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.587 [INFO][4601] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"146c2e68-8348-4ef2-ad46-1657816350fb", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89", Pod:"calico-apiserver-597ff87f5d-pnrsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia798c631633", MAC:"16:7e:11:82:64:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:59.624092 containerd[1594]: 2025-01-17 12:22:59.612 [INFO][4601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89" Namespace="calico-apiserver" Pod="calico-apiserver-597ff87f5d-pnrsk" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:22:59.681952 containerd[1594]: time="2025-01-17T12:22:59.680736916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:59.681952 containerd[1594]: time="2025-01-17T12:22:59.681451437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:59.681952 containerd[1594]: time="2025-01-17T12:22:59.681499954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:59.682326 containerd[1594]: time="2025-01-17T12:22:59.681961587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:59.791263 kubelet[2801]: I0117 12:22:59.791222 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:59.817214 containerd[1594]: time="2025-01-17T12:22:59.814606579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597ff87f5d-pnrsk,Uid:146c2e68-8348-4ef2-ad46-1657816350fb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89\"" Jan 17 12:23:00.287516 systemd[1]: run-containerd-runc-k8s.io-73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89-runc.82Fbs5.mount: Deactivated successfully. Jan 17 12:23:00.337894 containerd[1594]: time="2025-01-17T12:23:00.337802329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:00.339372 containerd[1594]: time="2025-01-17T12:23:00.339294385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:23:00.340794 containerd[1594]: time="2025-01-17T12:23:00.340710314Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:00.349797 containerd[1594]: time="2025-01-17T12:23:00.348581340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:00.349967 containerd[1594]: time="2025-01-17T12:23:00.349930805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.345884076s" Jan 17 12:23:00.350027 containerd[1594]: time="2025-01-17T12:23:00.349981486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:23:00.352311 containerd[1594]: time="2025-01-17T12:23:00.352276657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:00.377385 containerd[1594]: time="2025-01-17T12:23:00.377218429Z" level=info msg="CreateContainer within sandbox \"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:23:00.402982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130553860.mount: Deactivated successfully. 
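Note: mount unit names like run-netns-cni\x2d5d980b3f... and var-lib-containerd-tmpmounts-containerd\x2dmount130553860.mount above are systemd's path escaping at work: the leading '/' is stripped, each remaining '/' becomes '-', and a literal '-' (like most other non-alphanumerics) becomes a \xXX byte escape. A simplified Go encoder approximating the convention (the real rules in systemd-escape have a few extra edge cases, e.g. a leading '.'):

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's unit-name escaping for a path:
// strip the leading '/', map '/' to '-', and hex-escape anything
// outside [A-Za-z0-9:_.].
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the run-netns mount unit name seen in the log.
	fmt.Println(escapePath("/run/netns/cni-5d980b3f-8c99-0c91-0474-7b2b03b54608") + ".mount")
}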
Jan 17 12:23:00.403201 containerd[1594]: time="2025-01-17T12:23:00.403162081Z" level=info msg="CreateContainer within sandbox \"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9a20c2957cea00426a00185be247cb7d23d2fce5ce6bc92ad03f55b2ad08743e\"" Jan 17 12:23:00.405379 containerd[1594]: time="2025-01-17T12:23:00.404216573Z" level=info msg="StartContainer for \"9a20c2957cea00426a00185be247cb7d23d2fce5ce6bc92ad03f55b2ad08743e\"" Jan 17 12:23:00.505487 containerd[1594]: time="2025-01-17T12:23:00.505433949Z" level=info msg="StartContainer for \"9a20c2957cea00426a00185be247cb7d23d2fce5ce6bc92ad03f55b2ad08743e\" returns successfully" Jan 17 12:23:00.936002 systemd-networkd[1217]: calia798c631633: Gained IPv6LL Jan 17 12:23:01.087061 containerd[1594]: time="2025-01-17T12:23:01.086917273Z" level=info msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.164 [INFO][4781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.164 [INFO][4781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" iface="eth0" netns="/var/run/netns/cni-9d18bf77-56bd-6a6b-bc1d-7f13a84f8b9e" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.165 [INFO][4781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" iface="eth0" netns="/var/run/netns/cni-9d18bf77-56bd-6a6b-bc1d-7f13a84f8b9e" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.166 [INFO][4781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" iface="eth0" netns="/var/run/netns/cni-9d18bf77-56bd-6a6b-bc1d-7f13a84f8b9e" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.166 [INFO][4781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.166 [INFO][4781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.211 [INFO][4787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.213 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.213 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.230 [WARNING][4787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.231 [INFO][4787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.232 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:01.238079 containerd[1594]: 2025-01-17 12:23:01.235 [INFO][4781] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:01.238079 containerd[1594]: time="2025-01-17T12:23:01.237545523Z" level=info msg="TearDown network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" successfully" Jan 17 12:23:01.238079 containerd[1594]: time="2025-01-17T12:23:01.237583923Z" level=info msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" returns successfully" Jan 17 12:23:01.245794 containerd[1594]: time="2025-01-17T12:23:01.244677288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btsnv,Uid:d681af9d-6a3e-41bc-9243-7f519ac5c8d3,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:01.292838 systemd[1]: run-netns-cni\x2d9d18bf77\x2d56bd\x2d6a6b\x2dbc1d\x2d7f13a84f8b9e.mount: Deactivated successfully. Jan 17 12:23:01.469318 kubelet[2801]: I0117 12:23:01.467971 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5559488d8d-gh4cl" podStartSLOduration=26.118699752 podStartE2EDuration="28.467909742s" podCreationTimestamp="2025-01-17 12:22:33 +0000 UTC" firstStartedPulling="2025-01-17 12:22:58.002325801 +0000 UTC m=+52.101459789" lastFinishedPulling="2025-01-17 12:23:00.351535788 +0000 UTC m=+54.450669779" observedRunningTime="2025-01-17 12:23:01.467101983 +0000 UTC m=+55.566235983" watchObservedRunningTime="2025-01-17 12:23:01.467909742 +0000 UTC m=+55.567044051" Jan 17 12:23:01.531545 systemd-networkd[1217]: cali26673c9c198: Link UP Jan 17 12:23:01.535371 systemd[1]: run-containerd-runc-k8s.io-9a20c2957cea00426a00185be247cb7d23d2fce5ce6bc92ad03f55b2ad08743e-runc.wlqIpg.mount: Deactivated successfully. 
Jan 17 12:23:01.549513 systemd-networkd[1217]: cali26673c9c198: Gained carrier Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.354 [INFO][4793] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0 csi-node-driver- calico-system d681af9d-6a3e-41bc-9243-7f519ac5c8d3 808 0 2025-01-17 12:22:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal csi-node-driver-btsnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali26673c9c198 [] []}} ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.354 [INFO][4793] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.411 [INFO][4804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" HandleID="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.427 [INFO][4804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" HandleID="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", "pod":"csi-node-driver-btsnv", "timestamp":"2025-01-17 12:23:01.411862739 +0000 UTC"}, Hostname:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.428 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.428 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.428 [INFO][4804] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal' Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.430 [INFO][4804] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.439 [INFO][4804] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.453 [INFO][4804] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.459 [INFO][4804] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.470 [INFO][4804] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.470 [INFO][4804] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.476 [INFO][4804] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98 Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.483 [INFO][4804] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.500 [INFO][4804] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.500 [INFO][4804] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" host="ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal" Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.500 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
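Note: this second assignment repeats the same walk (affinity lookup, block load, claim) and lands on the block's next free ordinal, 192.168.124.70. Block membership and ordinal position are simple arithmetic over the /26; a quick Go check, with "ordinal" meaning the zero-based index a block allocator would track:

package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// ordinal returns an IPv4 address's zero-based position inside a block.
func ordinal(block netip.Prefix, a netip.Addr) int {
	base := binary.BigEndian.Uint32(block.Addr().AsSlice())
	ip := binary.BigEndian.Uint32(a.AsSlice())
	return int(ip - base)
}

func main() {
	block := netip.MustParsePrefix("192.168.124.64/26")
	a := netip.MustParseAddr("192.168.124.70")
	fmt.Println(block.Contains(a), ordinal(block, a)) // true 6
}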
Jan 17 12:23:01.580400 containerd[1594]: 2025-01-17 12:23:01.500 [INFO][4804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" HandleID="k8s-pod-network.93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.504 [INFO][4793] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d681af9d-6a3e-41bc-9243-7f519ac5c8d3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-btsnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26673c9c198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.504 [INFO][4793] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.70/32] ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.505 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26673c9c198 ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.551 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" 
WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.552 [INFO][4793] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d681af9d-6a3e-41bc-9243-7f519ac5c8d3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98", Pod:"csi-node-driver-btsnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26673c9c198", MAC:"4a:a2:34:8b:2f:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:01.583546 containerd[1594]: 2025-01-17 12:23:01.575 [INFO][4793] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98" Namespace="calico-system" Pod="csi-node-driver-btsnv" WorkloadEndpoint="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:01.727643 containerd[1594]: time="2025-01-17T12:23:01.727192688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:01.727643 containerd[1594]: time="2025-01-17T12:23:01.727277069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:01.727643 containerd[1594]: time="2025-01-17T12:23:01.727303530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:01.727643 containerd[1594]: time="2025-01-17T12:23:01.727446324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:01.824088 containerd[1594]: time="2025-01-17T12:23:01.824038239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-btsnv,Uid:d681af9d-6a3e-41bc-9243-7f519ac5c8d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98\"" Jan 17 12:23:02.794365 containerd[1594]: time="2025-01-17T12:23:02.794287654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:02.796063 containerd[1594]: time="2025-01-17T12:23:02.795964128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:23:02.797740 containerd[1594]: time="2025-01-17T12:23:02.797658737Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:02.801517 containerd[1594]: time="2025-01-17T12:23:02.801426123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:02.803038 containerd[1594]: time="2025-01-17T12:23:02.802523768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.449605979s" Jan 17 12:23:02.803038 containerd[1594]: time="2025-01-17T12:23:02.802576478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:02.805135 containerd[1594]: time="2025-01-17T12:23:02.803706063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:02.806577 containerd[1594]: time="2025-01-17T12:23:02.806507961Z" level=info msg="CreateContainer within sandbox \"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:02.832588 containerd[1594]: time="2025-01-17T12:23:02.832518764Z" level=info msg="CreateContainer within sandbox \"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8010d070002b54461d382bdb522c7cc2642239706219fd49eda0e7a792b3036\"" Jan 17 12:23:02.837098 containerd[1594]: time="2025-01-17T12:23:02.835337543Z" level=info msg="StartContainer for \"d8010d070002b54461d382bdb522c7cc2642239706219fd49eda0e7a792b3036\"" Jan 17 12:23:02.836895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343549485.mount: Deactivated successfully. 
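Note: the apiserver image is "pulled" twice in this span, once in about 2.45s here and again in roughly 242ms just below, because the second pull only resolves the digest while every blob is already in containerd's content store. The durations in these messages parse directly with Go's time package; a small hedged reading of the speedup:

package main

import (
	"fmt"
	"time"
)

func main() {
	first, _ := time.ParseDuration("2.449605979s")  // cold pull, from the log
	second, _ := time.ParseDuration("241.925851ms") // warm re-pull, blobs already local
	fmt.Printf("warm pull is %.1fx faster\n", first.Seconds()/second.Seconds())
}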
Jan 17 12:23:02.920960 systemd-networkd[1217]: cali26673c9c198: Gained IPv6LL Jan 17 12:23:02.953728 containerd[1594]: time="2025-01-17T12:23:02.953637241Z" level=info msg="StartContainer for \"d8010d070002b54461d382bdb522c7cc2642239706219fd49eda0e7a792b3036\" returns successfully" Jan 17 12:23:03.040231 containerd[1594]: time="2025-01-17T12:23:03.038995339Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:03.041386 containerd[1594]: time="2025-01-17T12:23:03.041332196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:23:03.046559 containerd[1594]: time="2025-01-17T12:23:03.045683339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 241.925851ms" Jan 17 12:23:03.046999 containerd[1594]: time="2025-01-17T12:23:03.046966779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:03.048117 containerd[1594]: time="2025-01-17T12:23:03.048075407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:23:03.054520 containerd[1594]: time="2025-01-17T12:23:03.054474533Z" level=info msg="CreateContainer within sandbox \"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:03.076237 containerd[1594]: time="2025-01-17T12:23:03.076185936Z" level=info msg="CreateContainer within sandbox \"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6928b27c46834dcbdda9da7b6517428cf12a40fd72e1bfb7263668818c463f5d\"" Jan 17 12:23:03.079223 containerd[1594]: time="2025-01-17T12:23:03.077715357Z" level=info msg="StartContainer for \"6928b27c46834dcbdda9da7b6517428cf12a40fd72e1bfb7263668818c463f5d\"" Jan 17 12:23:03.242001 containerd[1594]: time="2025-01-17T12:23:03.241932306Z" level=info msg="StartContainer for \"6928b27c46834dcbdda9da7b6517428cf12a40fd72e1bfb7263668818c463f5d\" returns successfully" Jan 17 12:23:03.516357 kubelet[2801]: I0117 12:23:03.516079 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-597ff87f5d-pnrsk" podStartSLOduration=28.287663925 podStartE2EDuration="31.515896148s" podCreationTimestamp="2025-01-17 12:22:32 +0000 UTC" firstStartedPulling="2025-01-17 12:22:59.819507758 +0000 UTC m=+53.918641749" lastFinishedPulling="2025-01-17 12:23:03.047739984 +0000 UTC m=+57.146873972" observedRunningTime="2025-01-17 12:23:03.515434752 +0000 UTC m=+57.614568751" watchObservedRunningTime="2025-01-17 12:23:03.515896148 +0000 UTC m=+57.615030145" Jan 17 12:23:03.518606 kubelet[2801]: I0117 12:23:03.518263 2801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-597ff87f5d-kz498" podStartSLOduration=26.723422867 podStartE2EDuration="31.518174953s" podCreationTimestamp="2025-01-17 12:22:32 +0000 UTC" firstStartedPulling="2025-01-17 12:22:58.008476508 +0000 UTC 
m=+52.107610496" lastFinishedPulling="2025-01-17 12:23:02.803228593 +0000 UTC m=+56.902362582" observedRunningTime="2025-01-17 12:23:03.491215291 +0000 UTC m=+57.590349288" watchObservedRunningTime="2025-01-17 12:23:03.518174953 +0000 UTC m=+57.617308951" Jan 17 12:23:03.823105 systemd[1]: Started sshd@7-10.128.0.73:22-139.178.89.65:60850.service - OpenSSH per-connection server daemon (139.178.89.65:60850). Jan 17 12:23:04.213713 sshd[4970]: Accepted publickey for core from 139.178.89.65 port 60850 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:04.217751 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:04.237865 systemd-logind[1570]: New session 8 of user core. Jan 17 12:23:04.244192 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:23:04.491127 kubelet[2801]: I0117 12:23:04.489970 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:04.684431 sshd[4970]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:04.701261 systemd[1]: sshd@7-10.128.0.73:22-139.178.89.65:60850.service: Deactivated successfully. Jan 17 12:23:04.716200 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:23:04.720064 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:23:04.724166 systemd-logind[1570]: Removed session 8. Jan 17 12:23:04.756989 containerd[1594]: time="2025-01-17T12:23:04.754043194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.759882 containerd[1594]: time="2025-01-17T12:23:04.759313112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:23:04.759882 containerd[1594]: time="2025-01-17T12:23:04.759450565Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.772253 containerd[1594]: time="2025-01-17T12:23:04.772193616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.775391 containerd[1594]: time="2025-01-17T12:23:04.773477223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.72534518s" Jan 17 12:23:04.775391 containerd[1594]: time="2025-01-17T12:23:04.773541543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:23:04.791743 containerd[1594]: time="2025-01-17T12:23:04.791198990Z" level=info msg="CreateContainer within sandbox \"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:23:04.832626 containerd[1594]: time="2025-01-17T12:23:04.832440637Z" level=info msg="CreateContainer within sandbox \"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns 
container id \"07ff00a2cd387340db728316392894aca1093682f8f410add54789f41a7a8822\"" Jan 17 12:23:04.838069 containerd[1594]: time="2025-01-17T12:23:04.837203977Z" level=info msg="StartContainer for \"07ff00a2cd387340db728316392894aca1093682f8f410add54789f41a7a8822\"" Jan 17 12:23:04.974661 systemd[1]: run-containerd-runc-k8s.io-07ff00a2cd387340db728316392894aca1093682f8f410add54789f41a7a8822-runc.UE5G6h.mount: Deactivated successfully. Jan 17 12:23:05.034489 containerd[1594]: time="2025-01-17T12:23:05.034363384Z" level=info msg="StartContainer for \"07ff00a2cd387340db728316392894aca1093682f8f410add54789f41a7a8822\" returns successfully" Jan 17 12:23:05.037588 containerd[1594]: time="2025-01-17T12:23:05.037298438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:23:05.164857 ntpd[1538]: Listen normally on 6 vxlan.calico 192.168.124.64:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 6 vxlan.calico 192.168.124.64:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 7 vxlan.calico [fe80::64fc:c7ff:fe61:9dae%4]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 8 cali3fc3c383293 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 9 caliac282a76c30 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 10 califdee97c8ded [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 11 cali1e04ab2514b [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 12 calia798c631633 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:23:05.165583 ntpd[1538]: 17 Jan 12:23:05 ntpd[1538]: Listen normally on 13 cali26673c9c198 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:23:05.164986 ntpd[1538]: Listen normally on 7 vxlan.calico [fe80::64fc:c7ff:fe61:9dae%4]:123 Jan 17 12:23:05.165068 ntpd[1538]: Listen normally on 8 cali3fc3c383293 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:23:05.165124 ntpd[1538]: Listen normally on 9 caliac282a76c30 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:23:05.165180 ntpd[1538]: Listen normally on 10 califdee97c8ded [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:23:05.165241 ntpd[1538]: Listen normally on 11 cali1e04ab2514b [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:23:05.165299 ntpd[1538]: Listen normally on 12 calia798c631633 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:23:05.165352 ntpd[1538]: Listen normally on 13 cali26673c9c198 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:23:05.498198 kubelet[2801]: I0117 12:23:05.498146 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:06.090799 containerd[1594]: time="2025-01-17T12:23:06.090706726Z" level=info msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.357 [WARNING][5047] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4dc87eb0-298c-4584-bb03-0f123c703b75", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7", Pod:"coredns-76f75df574-bz4n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fc3c383293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.360 [INFO][5047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.361 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" iface="eth0" netns="" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.361 [INFO][5047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.362 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.514 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.517 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.517 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.530 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.531 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.536 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:06.550064 containerd[1594]: 2025-01-17 12:23:06.541 [INFO][5047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.552413 containerd[1594]: time="2025-01-17T12:23:06.550476884Z" level=info msg="TearDown network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" successfully" Jan 17 12:23:06.552413 containerd[1594]: time="2025-01-17T12:23:06.550529329Z" level=info msg="StopPodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" returns successfully" Jan 17 12:23:06.554876 containerd[1594]: time="2025-01-17T12:23:06.554097013Z" level=info msg="RemovePodSandbox for \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" Jan 17 12:23:06.554876 containerd[1594]: time="2025-01-17T12:23:06.554153529Z" level=info msg="Forcibly stopping sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\"" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.735 [WARNING][5075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4dc87eb0-298c-4584-bb03-0f123c703b75", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"12617923a05525525b7481afbdbb7709f34848abe85d7d267bdbf2a9145f62f7", Pod:"coredns-76f75df574-bz4n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fc3c383293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.738 [INFO][5075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.738 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" iface="eth0" netns="" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.738 [INFO][5075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.738 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.856 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.859 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.859 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.889 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.890 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" HandleID="k8s-pod-network.9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--bz4n4-eth0" Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.901 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:06.911633 containerd[1594]: 2025-01-17 12:23:06.907 [INFO][5075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85" Jan 17 12:23:06.913497 containerd[1594]: time="2025-01-17T12:23:06.912330861Z" level=info msg="TearDown network for sandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" successfully" Jan 17 12:23:07.029927 containerd[1594]: time="2025-01-17T12:23:07.029557194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
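Note: both warnings in this removal flow are benign by design. The WorkloadEndpoint now records the replacement sandbox's ContainerID (12617923...), so the delete for the old sandbox 9a1a6234... must not remove the WEP; and the "not found" status only means the sandbox metadata is already gone when the event is emitted. The ownership guard reduces to an ID comparison; a schematic Go sketch with illustrative field names abbreviated from the log's v3.WorkloadEndpoint dump:

package main

import "fmt"

// workloadEndpoint loosely mirrors the fields visible in the log's
// endpoint dump; the names here are illustrative, not Calico's types.
type workloadEndpoint struct {
	Pod         string
	ContainerID string
}

// shouldDeleteWEP reproduces the guard behind the warning: only the
// sandbox that currently owns the endpoint may delete it.
func shouldDeleteWEP(wep workloadEndpoint, cniContainerID string) bool {
	return wep.ContainerID == cniContainerID
}

func main() {
	wep := workloadEndpoint{Pod: "coredns-76f75df574-bz4n4", ContainerID: "12617923a0552..."}
	fmt.Println(shouldDeleteWEP(wep, "9a1a62346f199...")) // false -> "don't delete WEP"
}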
Jan 17 12:23:07.029927 containerd[1594]: time="2025-01-17T12:23:07.029668363Z" level=info msg="RemovePodSandbox \"9a1a62346f199ecb05f15925b7c83f33758d5d80594df30b8a8e153e5d836e85\" returns successfully" Jan 17 12:23:07.031723 containerd[1594]: time="2025-01-17T12:23:07.031670005Z" level=info msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\"" Jan 17 12:23:07.049272 containerd[1594]: time="2025-01-17T12:23:07.047356561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:07.051344 containerd[1594]: time="2025-01-17T12:23:07.051258023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:23:07.056638 containerd[1594]: time="2025-01-17T12:23:07.056541395Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:07.078897 containerd[1594]: time="2025-01-17T12:23:07.077940652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:07.080677 containerd[1594]: time="2025-01-17T12:23:07.080609748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.043242145s" Jan 17 12:23:07.080677 containerd[1594]: time="2025-01-17T12:23:07.080674983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:23:07.091666 containerd[1594]: time="2025-01-17T12:23:07.091587164Z" level=info msg="CreateContainer within sandbox \"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:23:07.170734 containerd[1594]: time="2025-01-17T12:23:07.168049167Z" level=info msg="CreateContainer within sandbox \"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97153606279f6aad55b076ff91329ecea0b34d0f7d019eab6f0f2c41da440698\"" Jan 17 12:23:07.170734 containerd[1594]: time="2025-01-17T12:23:07.168990375Z" level=info msg="StartContainer for \"97153606279f6aad55b076ff91329ecea0b34d0f7d019eab6f0f2c41da440698\"" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.207 [WARNING][5102] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ae2696f0-689c-48e8-ae28-02108fb8bde9", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa", Pod:"coredns-76f75df574-ksx62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac282a76c30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.210 [INFO][5102] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.211 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" iface="eth0" netns="" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.220 [INFO][5102] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.220 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.397 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.398 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.398 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.414 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.414 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.417 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:07.427723 containerd[1594]: 2025-01-17 12:23:07.423 [INFO][5102] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.434171 containerd[1594]: time="2025-01-17T12:23:07.427758187Z" level=info msg="TearDown network for sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" successfully" Jan 17 12:23:07.434171 containerd[1594]: time="2025-01-17T12:23:07.428016926Z" level=info msg="StopPodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" returns successfully" Jan 17 12:23:07.434171 containerd[1594]: time="2025-01-17T12:23:07.430034125Z" level=info msg="RemovePodSandbox for \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\"" Jan 17 12:23:07.434171 containerd[1594]: time="2025-01-17T12:23:07.430081050Z" level=info msg="Forcibly stopping sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\"" Jan 17 12:23:07.597440 containerd[1594]: time="2025-01-17T12:23:07.597330385Z" level=info msg="StartContainer for \"97153606279f6aad55b076ff91329ecea0b34d0f7d019eab6f0f2c41da440698\" returns successfully" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.599 [WARNING][5151] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ae2696f0-689c-48e8-ae28-02108fb8bde9", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"aeecc8a334feb5c8563c01e203d18fb222954bf641e562c3429df2e27ff0a6aa", Pod:"coredns-76f75df574-ksx62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac282a76c30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.600 [INFO][5151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.601 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" iface="eth0" netns="" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.601 [INFO][5151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.601 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.656 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.656 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.657 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.667 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.667 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" HandleID="k8s-pod-network.73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-coredns--76f75df574--ksx62-eth0" Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.671 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:07.678369 containerd[1594]: 2025-01-17 12:23:07.675 [INFO][5151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050" Jan 17 12:23:07.682508 containerd[1594]: time="2025-01-17T12:23:07.680726641Z" level=info msg="TearDown network for sandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" successfully" Jan 17 12:23:07.689140 containerd[1594]: time="2025-01-17T12:23:07.688821360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:07.689140 containerd[1594]: time="2025-01-17T12:23:07.688918484Z" level=info msg="RemovePodSandbox \"73db75c04d8249efeb8c8ca6207c4b13c4c57dcf2841c3b1369691fbe8070050\" returns successfully" Jan 17 12:23:07.690131 containerd[1594]: time="2025-01-17T12:23:07.689571361Z" level=info msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\"" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.781 [WARNING][5189] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0", GenerateName:"calico-kube-controllers-5559488d8d-", Namespace:"calico-system", SelfLink:"", UID:"c51740fd-ec40-41a1-a974-57291e05645a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559488d8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f", Pod:"calico-kube-controllers-5559488d8d-gh4cl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdee97c8ded", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.782 [INFO][5189] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.782 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" iface="eth0" netns="" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.782 [INFO][5189] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.782 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.829 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.829 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.829 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.838 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.838 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.840 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:07.843051 containerd[1594]: 2025-01-17 12:23:07.841 [INFO][5189] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.844636 containerd[1594]: time="2025-01-17T12:23:07.843125620Z" level=info msg="TearDown network for sandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" successfully" Jan 17 12:23:07.844636 containerd[1594]: time="2025-01-17T12:23:07.843191559Z" level=info msg="StopPodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" returns successfully" Jan 17 12:23:07.844636 containerd[1594]: time="2025-01-17T12:23:07.844074630Z" level=info msg="RemovePodSandbox for \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\"" Jan 17 12:23:07.844636 containerd[1594]: time="2025-01-17T12:23:07.844115176Z" level=info msg="Forcibly stopping sandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\"" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.893 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0", GenerateName:"calico-kube-controllers-5559488d8d-", Namespace:"calico-system", SelfLink:"", UID:"c51740fd-ec40-41a1-a974-57291e05645a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559488d8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c5d18c69c444a5b10421d70e763dc2f46245aec0fd290323c9e139e93be2730f", Pod:"calico-kube-controllers-5559488d8d-gh4cl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdee97c8ded", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.893 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.893 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" iface="eth0" netns="" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.893 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.893 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.918 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.918 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.918 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.926 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.926 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" HandleID="k8s-pod-network.55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--kube--controllers--5559488d8d--gh4cl-eth0" Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.930 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:07.932900 containerd[1594]: 2025-01-17 12:23:07.931 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806" Jan 17 12:23:07.932900 containerd[1594]: time="2025-01-17T12:23:07.932871049Z" level=info msg="TearDown network for sandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" successfully" Jan 17 12:23:07.938036 containerd[1594]: time="2025-01-17T12:23:07.937980643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:07.938320 containerd[1594]: time="2025-01-17T12:23:07.938070323Z" level=info msg="RemovePodSandbox \"55dd484248d1cfbe1e83acc0d0ec0519925a359fb2ae151ebb0fc62ace12c806\" returns successfully" Jan 17 12:23:07.939088 containerd[1594]: time="2025-01-17T12:23:07.939054261Z" level=info msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\"" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:07.985 [WARNING][5240] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"146c2e68-8348-4ef2-ad46-1657816350fb", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89", Pod:"calico-apiserver-597ff87f5d-pnrsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia798c631633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:07.985 [INFO][5240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:07.985 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" iface="eth0" netns="" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:07.985 [INFO][5240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:07.985 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.026 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.027 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.027 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.036 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.036 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.037 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.040940 containerd[1594]: 2025-01-17 12:23:08.039 [INFO][5240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.041572 containerd[1594]: time="2025-01-17T12:23:08.040986818Z" level=info msg="TearDown network for sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" successfully" Jan 17 12:23:08.041572 containerd[1594]: time="2025-01-17T12:23:08.041024544Z" level=info msg="StopPodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" returns successfully" Jan 17 12:23:08.042427 containerd[1594]: time="2025-01-17T12:23:08.042355095Z" level=info msg="RemovePodSandbox for \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\"" Jan 17 12:23:08.042427 containerd[1594]: time="2025-01-17T12:23:08.042401630Z" level=info msg="Forcibly stopping sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\"" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.095 [WARNING][5264] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"146c2e68-8348-4ef2-ad46-1657816350fb", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"73efa4c0b7b4102f44eefcd23158ddd8c72486429db8074e8360898ca0f2ed89", Pod:"calico-apiserver-597ff87f5d-pnrsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia798c631633", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.095 [INFO][5264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.095 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" iface="eth0" netns="" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.095 [INFO][5264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.095 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.134 [INFO][5270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.135 [INFO][5270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.135 [INFO][5270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.142 [WARNING][5270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.142 [INFO][5270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" HandleID="k8s-pod-network.955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--pnrsk-eth0" Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.143 [INFO][5270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.147885 containerd[1594]: 2025-01-17 12:23:08.145 [INFO][5264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f" Jan 17 12:23:08.149430 containerd[1594]: time="2025-01-17T12:23:08.147954228Z" level=info msg="TearDown network for sandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" successfully" Jan 17 12:23:08.154278 containerd[1594]: time="2025-01-17T12:23:08.154172573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:08.154580 containerd[1594]: time="2025-01-17T12:23:08.154321484Z" level=info msg="RemovePodSandbox \"955b55adf627a4119612d63b514e3ac73c8a93e33d3a77b36ef41c745b75c29f\" returns successfully" Jan 17 12:23:08.154580 containerd[1594]: time="2025-01-17T12:23:08.155038750Z" level=info msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\"" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.209 [WARNING][5288] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba8556b8-0ba5-4be1-a609-db203f9464c8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a", Pod:"calico-apiserver-597ff87f5d-kz498", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e04ab2514b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.209 [INFO][5288] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.209 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" iface="eth0" netns="" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.210 [INFO][5288] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.210 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.240 [INFO][5295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.240 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.240 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.249 [WARNING][5295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.250 [INFO][5295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.252 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.254990 containerd[1594]: 2025-01-17 12:23:08.253 [INFO][5288] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.254990 containerd[1594]: time="2025-01-17T12:23:08.254951588Z" level=info msg="TearDown network for sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" successfully" Jan 17 12:23:08.254990 containerd[1594]: time="2025-01-17T12:23:08.254989703Z" level=info msg="StopPodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" returns successfully" Jan 17 12:23:08.256372 containerd[1594]: time="2025-01-17T12:23:08.255660714Z" level=info msg="RemovePodSandbox for \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\"" Jan 17 12:23:08.256372 containerd[1594]: time="2025-01-17T12:23:08.255704443Z" level=info msg="Forcibly stopping sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\"" Jan 17 12:23:08.273426 kubelet[2801]: I0117 12:23:08.273386 2801 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:23:08.275373 kubelet[2801]: I0117 12:23:08.273488 2801 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.337 [WARNING][5313] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0", GenerateName:"calico-apiserver-597ff87f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba8556b8-0ba5-4be1-a609-db203f9464c8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597ff87f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"c6ac1c6e08a4d917a06f542c7cd334ce169847fa3aba8c6509e372f71ec5fe7a", Pod:"calico-apiserver-597ff87f5d-kz498", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e04ab2514b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.337 [INFO][5313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.337 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" iface="eth0" netns="" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.337 [INFO][5313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.337 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.368 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.368 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.369 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.376 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.376 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" HandleID="k8s-pod-network.29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-calico--apiserver--597ff87f5d--kz498-eth0" Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.378 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.381841 containerd[1594]: 2025-01-17 12:23:08.379 [INFO][5313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6" Jan 17 12:23:08.381841 containerd[1594]: time="2025-01-17T12:23:08.381388975Z" level=info msg="TearDown network for sandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" successfully" Jan 17 12:23:08.387022 containerd[1594]: time="2025-01-17T12:23:08.386966972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:08.387022 containerd[1594]: time="2025-01-17T12:23:08.387059482Z" level=info msg="RemovePodSandbox \"29840555d131025a2e4e92316c79dc7a786161f887df32b59be5cbcee44923c6\" returns successfully" Jan 17 12:23:08.387849 containerd[1594]: time="2025-01-17T12:23:08.387807601Z" level=info msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.440 [WARNING][5338] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d681af9d-6a3e-41bc-9243-7f519ac5c8d3", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98", Pod:"csi-node-driver-btsnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26673c9c198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.440 [INFO][5338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.440 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" iface="eth0" netns="" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.440 [INFO][5338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.440 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.467 [INFO][5344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.467 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.467 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.477 [WARNING][5344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.477 [INFO][5344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.480 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.483256 containerd[1594]: 2025-01-17 12:23:08.481 [INFO][5338] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.483256 containerd[1594]: time="2025-01-17T12:23:08.483159622Z" level=info msg="TearDown network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" successfully" Jan 17 12:23:08.483256 containerd[1594]: time="2025-01-17T12:23:08.483212784Z" level=info msg="StopPodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" returns successfully" Jan 17 12:23:08.485379 containerd[1594]: time="2025-01-17T12:23:08.483952801Z" level=info msg="RemovePodSandbox for \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" Jan 17 12:23:08.485379 containerd[1594]: time="2025-01-17T12:23:08.483992765Z" level=info msg="Forcibly stopping sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\"" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.536 [WARNING][5362] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d681af9d-6a3e-41bc-9243-7f519ac5c8d3", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-62f509289be87beba9d0.c.flatcar-212911.internal", ContainerID:"93127b837158dce5d0f1589d38e513b4421205a99a2bf23a52a5d836f1097a98", Pod:"csi-node-driver-btsnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26673c9c198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.536 [INFO][5362] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.536 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" iface="eth0" netns="" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.536 [INFO][5362] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.536 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.587 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.587 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.588 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.604 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.607 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" HandleID="k8s-pod-network.146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Workload="ci--4081--3--0--62f509289be87beba9d0.c.flatcar--212911.internal-k8s-csi--node--driver--btsnv-eth0" Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.610 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.613900 containerd[1594]: 2025-01-17 12:23:08.612 [INFO][5362] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01" Jan 17 12:23:08.613900 containerd[1594]: time="2025-01-17T12:23:08.613465174Z" level=info msg="TearDown network for sandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" successfully" Jan 17 12:23:08.620811 containerd[1594]: time="2025-01-17T12:23:08.619548632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:08.620811 containerd[1594]: time="2025-01-17T12:23:08.619650551Z" level=info msg="RemovePodSandbox \"146c7a6a7723ab831108816620e7379f3ed247c9d10a0d5348a8811c2128ce01\" returns successfully" Jan 17 12:23:09.733736 systemd[1]: Started sshd@8-10.128.0.73:22-139.178.89.65:60866.service - OpenSSH per-connection server daemon (139.178.89.65:60866). Jan 17 12:23:10.028758 sshd[5374]: Accepted publickey for core from 139.178.89.65 port 60866 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:10.030606 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.037327 systemd-logind[1570]: New session 9 of user core. Jan 17 12:23:10.042414 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:23:10.325399 sshd[5374]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:10.330724 systemd[1]: sshd@8-10.128.0.73:22-139.178.89.65:60866.service: Deactivated successfully. Jan 17 12:23:10.338034 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:23:10.339100 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:23:10.341076 systemd-logind[1570]: Removed session 9. Jan 17 12:23:13.976790 systemd[1]: run-containerd-runc-k8s.io-9a20c2957cea00426a00185be247cb7d23d2fce5ce6bc92ad03f55b2ad08743e-runc.lrqEUd.mount: Deactivated successfully. Jan 17 12:23:15.373247 systemd[1]: Started sshd@9-10.128.0.73:22-139.178.89.65:48622.service - OpenSSH per-connection server daemon (139.178.89.65:48622). Jan 17 12:23:15.664003 sshd[5416]: Accepted publickey for core from 139.178.89.65 port 48622 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:15.665992 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:15.672079 systemd-logind[1570]: New session 10 of user core. 
Jan 17 12:23:15.680387 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:23:15.957380 sshd[5416]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:15.962416 systemd[1]: sshd@9-10.128.0.73:22-139.178.89.65:48622.service: Deactivated successfully.
Jan 17 12:23:15.970221 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:23:15.970292 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:23:15.972556 systemd-logind[1570]: Removed session 10.
Jan 17 12:23:16.005630 systemd[1]: Started sshd@10-10.128.0.73:22-139.178.89.65:48626.service - OpenSSH per-connection server daemon (139.178.89.65:48626).
Jan 17 12:23:16.299351 sshd[5431]: Accepted publickey for core from 139.178.89.65 port 48626 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:16.301461 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:16.308194 systemd-logind[1570]: New session 11 of user core.
Jan 17 12:23:16.315308 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:23:16.630289 sshd[5431]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:16.637287 systemd[1]: sshd@10-10.128.0.73:22-139.178.89.65:48626.service: Deactivated successfully.
Jan 17 12:23:16.644058 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:23:16.645937 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:23:16.648105 systemd-logind[1570]: Removed session 11.
Jan 17 12:23:16.680230 systemd[1]: Started sshd@11-10.128.0.73:22-139.178.89.65:48628.service - OpenSSH per-connection server daemon (139.178.89.65:48628).
Jan 17 12:23:16.982945 sshd[5443]: Accepted publickey for core from 139.178.89.65 port 48628 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:16.985318 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:16.992081 systemd-logind[1570]: New session 12 of user core.
Jan 17 12:23:16.996422 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:23:17.276814 sshd[5443]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:17.282039 systemd[1]: sshd@11-10.128.0.73:22-139.178.89.65:48628.service: Deactivated successfully.
Jan 17 12:23:17.289113 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:23:17.289795 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:23:17.291573 systemd-logind[1570]: Removed session 12.
Jan 17 12:23:22.325186 systemd[1]: Started sshd@12-10.128.0.73:22-139.178.89.65:52178.service - OpenSSH per-connection server daemon (139.178.89.65:52178).
Jan 17 12:23:22.612733 sshd[5461]: Accepted publickey for core from 139.178.89.65 port 52178 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:22.614566 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:22.620102 systemd-logind[1570]: New session 13 of user core.
Jan 17 12:23:22.628719 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:23:22.902332 sshd[5461]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:22.907352 systemd[1]: sshd@12-10.128.0.73:22-139.178.89.65:52178.service: Deactivated successfully.
Jan 17 12:23:22.915369 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:23:22.916674 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:23:22.918306 systemd-logind[1570]: Removed session 13.
Jan 17 12:23:27.950186 systemd[1]: Started sshd@13-10.128.0.73:22-139.178.89.65:52184.service - OpenSSH per-connection server daemon (139.178.89.65:52184).
Jan 17 12:23:28.239019 sshd[5479]: Accepted publickey for core from 139.178.89.65 port 52184 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:28.241275 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:28.246942 systemd-logind[1570]: New session 14 of user core.
Jan 17 12:23:28.252600 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:23:28.530584 sshd[5479]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:28.535939 systemd[1]: sshd@13-10.128.0.73:22-139.178.89.65:52184.service: Deactivated successfully.
Jan 17 12:23:28.536571 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:23:28.542577 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:23:28.545245 systemd-logind[1570]: Removed session 14.
Jan 17 12:23:33.585421 systemd[1]: Started sshd@14-10.128.0.73:22-139.178.89.65:55492.service - OpenSSH per-connection server daemon (139.178.89.65:55492).
Jan 17 12:23:33.889514 sshd[5513]: Accepted publickey for core from 139.178.89.65 port 55492 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:33.891420 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:33.898014 systemd-logind[1570]: New session 15 of user core.
Jan 17 12:23:33.904369 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:23:34.199273 sshd[5513]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:34.205659 systemd[1]: sshd@14-10.128.0.73:22-139.178.89.65:55492.service: Deactivated successfully.
Jan 17 12:23:34.211104 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:23:34.211841 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:23:34.214045 systemd-logind[1570]: Removed session 15.
Jan 17 12:23:39.246922 systemd[1]: Started sshd@15-10.128.0.73:22-139.178.89.65:55502.service - OpenSSH per-connection server daemon (139.178.89.65:55502).
Jan 17 12:23:39.541000 sshd[5533]: Accepted publickey for core from 139.178.89.65 port 55502 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:39.542754 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:39.548612 systemd-logind[1570]: New session 16 of user core.
Jan 17 12:23:39.554433 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:23:39.833725 sshd[5533]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:39.840598 systemd[1]: sshd@15-10.128.0.73:22-139.178.89.65:55502.service: Deactivated successfully.
Jan 17 12:23:39.846176 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:23:39.847266 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:23:39.848707 systemd-logind[1570]: Removed session 16.
Jan 17 12:23:39.883864 systemd[1]: Started sshd@16-10.128.0.73:22-139.178.89.65:55518.service - OpenSSH per-connection server daemon (139.178.89.65:55518).
Jan 17 12:23:40.177219 sshd[5547]: Accepted publickey for core from 139.178.89.65 port 55518 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:40.179093 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:40.184933 systemd-logind[1570]: New session 17 of user core.
Jan 17 12:23:40.191582 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:23:40.542028 sshd[5547]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:40.546987 systemd[1]: sshd@16-10.128.0.73:22-139.178.89.65:55518.service: Deactivated successfully.
Jan 17 12:23:40.554617 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:23:40.554905 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:23:40.557542 systemd-logind[1570]: Removed session 17.
Jan 17 12:23:40.591245 systemd[1]: Started sshd@17-10.128.0.73:22-139.178.89.65:55530.service - OpenSSH per-connection server daemon (139.178.89.65:55530).
Jan 17 12:23:40.881023 sshd[5559]: Accepted publickey for core from 139.178.89.65 port 55530 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:40.883094 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:40.889901 systemd-logind[1570]: New session 18 of user core.
Jan 17 12:23:40.898271 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:23:43.014998 sshd[5559]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:43.021197 systemd[1]: sshd@17-10.128.0.73:22-139.178.89.65:55530.service: Deactivated successfully.
Jan 17 12:23:43.027912 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:23:43.031037 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:23:43.035502 systemd-logind[1570]: Removed session 18.
Jan 17 12:23:43.060202 systemd[1]: Started sshd@18-10.128.0.73:22-139.178.89.65:49616.service - OpenSSH per-connection server daemon (139.178.89.65:49616).
Jan 17 12:23:43.355740 sshd[5580]: Accepted publickey for core from 139.178.89.65 port 49616 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:43.357845 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:43.364894 systemd-logind[1570]: New session 19 of user core.
Jan 17 12:23:43.373329 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:23:43.795094 sshd[5580]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:43.801041 systemd[1]: sshd@18-10.128.0.73:22-139.178.89.65:49616.service: Deactivated successfully.
Jan 17 12:23:43.807319 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:23:43.808075 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:23:43.810093 systemd-logind[1570]: Removed session 19.
Jan 17 12:23:43.843315 systemd[1]: Started sshd@19-10.128.0.73:22-139.178.89.65:49630.service - OpenSSH per-connection server daemon (139.178.89.65:49630).
Jan 17 12:23:44.142472 sshd[5592]: Accepted publickey for core from 139.178.89.65 port 49630 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:44.144471 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:44.151069 systemd-logind[1570]: New session 20 of user core.
Jan 17 12:23:44.155119 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:23:44.430613 sshd[5592]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:44.436764 systemd[1]: sshd@19-10.128.0.73:22-139.178.89.65:49630.service: Deactivated successfully.
Jan 17 12:23:44.442765 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:23:44.443578 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:23:44.445881 systemd-logind[1570]: Removed session 20.
Jan 17 12:23:49.479591 systemd[1]: Started sshd@20-10.128.0.73:22-139.178.89.65:49632.service - OpenSSH per-connection server daemon (139.178.89.65:49632).
Jan 17 12:23:49.776363 sshd[5625]: Accepted publickey for core from 139.178.89.65 port 49632 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:49.778443 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:49.784253 systemd-logind[1570]: New session 21 of user core.
Jan 17 12:23:49.788162 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:23:50.066939 sshd[5625]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:50.072052 systemd[1]: sshd@20-10.128.0.73:22-139.178.89.65:49632.service: Deactivated successfully.
Jan 17 12:23:50.078631 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:23:50.079316 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:23:50.081648 systemd-logind[1570]: Removed session 21.
Jan 17 12:23:55.116756 systemd[1]: Started sshd@21-10.128.0.73:22-139.178.89.65:41608.service - OpenSSH per-connection server daemon (139.178.89.65:41608).
Jan 17 12:23:55.406227 sshd[5644]: Accepted publickey for core from 139.178.89.65 port 41608 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:23:55.408181 sshd[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:55.414503 systemd-logind[1570]: New session 22 of user core.
Jan 17 12:23:55.418145 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:23:55.697094 sshd[5644]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:55.702280 systemd[1]: sshd@21-10.128.0.73:22-139.178.89.65:41608.service: Deactivated successfully.
Jan 17 12:23:55.709632 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:23:55.710402 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:23:55.712509 systemd-logind[1570]: Removed session 22.
Jan 17 12:24:00.755241 systemd[1]: Started sshd@22-10.128.0.73:22-139.178.89.65:41624.service - OpenSSH per-connection server daemon (139.178.89.65:41624).
Jan 17 12:24:01.091903 sshd[5699]: Accepted publickey for core from 139.178.89.65 port 41624 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:24:01.092333 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:01.099515 systemd-logind[1570]: New session 23 of user core.
Jan 17 12:24:01.104549 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:24:01.425241 sshd[5699]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:01.434502 systemd[1]: sshd@22-10.128.0.73:22-139.178.89.65:41624.service: Deactivated successfully.
Jan 17 12:24:01.440891 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:24:01.442222 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:24:01.443752 systemd-logind[1570]: Removed session 23.
Jan 17 12:24:06.476348 systemd[1]: Started sshd@23-10.128.0.73:22-139.178.89.65:52692.service - OpenSSH per-connection server daemon (139.178.89.65:52692).
Jan 17 12:24:06.766605 sshd[5716]: Accepted publickey for core from 139.178.89.65 port 52692 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ
Jan 17 12:24:06.769430 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:06.779607 systemd-logind[1570]: New session 24 of user core.
Jan 17 12:24:06.787569 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:24:07.105262 sshd[5716]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:07.112468 systemd[1]: sshd@23-10.128.0.73:22-139.178.89.65:52692.service: Deactivated successfully.
Jan 17 12:24:07.118901 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:24:07.121524 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:24:07.124036 systemd-logind[1570]: Removed session 24.